Sunday, 19 May 2013

Monitoring System Performance For An SAP Application

Before performance testing a live application, we first need to understand the performance goals and requirements. By obtaining and analyzing the existing performance metrics, we get a thorough picture of system performance, which can then be used for benchmarking. This data is especially important when an upgrade is planned, since the post-upgrade performance will be compared against it. Crystal-clear SLAs should be defined in the requirement-gathering phase itself and agreed upon by the stakeholders. Given below is the standard procedure for extracting the relevant data to analyze and set up the performance SLAs for an existing SAP GUI system before any major upgrade.

CCMS Monitoring
Extracting performance metrics and logs from an SAP System

  • Getting the preliminary data for performance testing is relatively easy with SAP systems, owing to the inbuilt monitoring and reporting facilities that SAP ships with. Before diving into these features, it is important to understand some frequently used SAP terminology. For instance, any live SAP system has a central “Computing Center Management System (CCMS)”, which can be used to monitor, analyze and distribute the workload of clients and to display the resource usage of system components. From CCMS it is possible to monitor the host CPU, the database, the operating system and SAP services, i.e. we can obtain the CPU utilization rate, the average workload for the last few minutes, and so on.
  • Several inbuilt monitors are available with SAP: the workload monitor, the global workload monitor, operating system monitors, database monitors, etc. To obtain statistical data for the ABAP kernel, the workload monitor (transaction ST03) may be used, while the global workload monitor (ST03G) displays statistical records for entire landscapes (SAP R/3 and non-SAP R/3 systems). Database monitors are used to obtain the KPIs of the database system. The official SAP documentation covers the configuration and use of these monitors.
  • When analyzing the performance of a system, it is good to start with workload monitoring, especially the response time distribution. The workload monitor offers different views for workload analysis (see the official documentation). Transaction ST03 launches the workload monitor; from the workload tree we select the particular instance and period for which we need the response time distribution, then choose “Response Time Distribution” as the analysis view type.
  • The output area contains three tab pages with the following characteristics:


  • In addition, the workload monitor can be used to display the number of users working on an instance, the workload distribution, transaction response time details and their memory utilization, spool request volumes and much more. If we are concerned with the workload distribution amongst individual service types, we may need to use the global workload monitor (ST03G).
  • Operating system monitors and database monitors are useful when analyzing the performance of the OS and databases (the alert monitor, RZ20, may be used for the same). Below is the standard monitoring architecture diagram.
  • The most commonly used SAP GUI monitoring transactions are ST03N (workload), ST06 (operating system), ST02 (buffers), ST04 (database) and ST07 (user distribution).
Source: SAP community forum and official documentation

Monday, 29 April 2013

Backup for SAP

This blog post discusses the types of backups and their significance.

Storage is vital to any computer system. It is doubly vital to an ERP system, which serves as the memory of the business. Lose that memory and you’re out of business.

Backups are copies of all the important data on your system, taken and preserved in such a way that you can recover your data no matter what happens. Making backups, and being sure you have good ones, is a best business practice. Typically an SAP system will have one or more storage administrators to take care of storage and backup systems. However, as manager of the SAP system, it is important that you understand the basics of backup, because it is so intimately connected with running a successful SAP system.

Basically, ‘backup’ refers to three different things, only one of which is truly backup. There is short-term backup, which preserves a copy of the data for a short time, typically a week or two. There is true backup, which saves a copy for a year or so. Then there is archival backup, which saves important data permanently – or at least for five years.

Friday, 11 January 2013

FIX Protocol develops standards for European consolidated tape

In a move billed by the standards-setting body as a key milestone in the development of an EU-wide consolidated tape, FIX Protocol Limited has released a set of standards for the consolidation of trade reports and market data for European equity markets.
Market Data For European Equity Markets

The recommendations from the group aim to identify where the trade was issued, in what currency, and where it was executed, timestamped to the nearest microsecond.

Tuesday, 20 November 2012

Load Testing Oracle R12 Applications Using LoadRunner


The Challenge
We recently load tested our first Oracle R12 release (all modules of Oracle ERP R12, deployed nationwide and internationally). The company was upgrading to R12 from 11.5.8 largely for performance reasons.
We knew we’d be “cutting new ground” with LoadRunner on R12. This became evident with our first test record-and-playback, which failed even after finding and fixing all the missing correlations. We raised a ticket with HP (SR #4622615067), and with their initial help we overcame, step by step, all the nuances of coaxing VuGen to record successfully, and then creatively worked around its inability to recognize the full set of identifiers for a new Java ITEMTREE object.

Configuring Oracle Unified Directory (OUD) 11g as a Directory Server


I used Oracle Unified Directory (OUD) Version 11.1.1.5.0 for a local test deployment. I have tried to collect as much configuration information as possible in this post.
Ideally, there are three possible configuration options for OUD:
  • as a Directory Server
  • as a Replication Server
  • as a Proxy Server
Directory Server provides the main LDAP functionality in OUD. Proxy Server can be used for proxying LDAP requests. And Replication Server is used for replication from one OUD to another OUD, or even to an ODSEE (formerly Sun Java Directory) server. You can read my previous posts on OUD here and here.
In this post, we will talk about configuring OUD after installation as a Directory Server. You can read about OUD installation in my previous post here.

Monday, 19 November 2012

HP DIAGNOSTICS


Overview
Identifying and correcting availability and performance problems can be costly, time consuming and risky. IT organizations often spend more time identifying an owner for a problem than resolving it.
HP Diagnostics helps to improve application availability and performance in pre-production and production environments. HP’s diagnostics software is used to drill down from the end user into application components and cross-platform service calls to resolve the toughest problems, including slow services, methods and SQL, out-of-memory errors, threading problems and more.

Performing Manual Correlation with Dynamic Boundaries in LR

What is correlation: It is the process of handling dynamic values in our script. The dynamic value is replaced by a parameter whose value we capture from the server response.

Ways to do correlation: There are two ways to do correlation.

They are as follows:

  • Auto-correlation: The correlation engine in the LoadRunner package detects the dynamic value and replaces it with a parameter automatically
  • Manual correlation: This requires a good understanding of the script and the server responses. Manual correlation can sometimes be a bit complex, but it is always the preferred method for handling dynamic values in a script

Usually, manual correlation is done by capturing the dynamic value that lies between static left and right boundaries.

Objective: The intention of this article is to present a method that is useful for capturing and handling dynamic values when even the left and right boundaries themselves are dynamic.

The solution can be quite simple: instead of pinning down literal boundary strings, we can use text flags.

Before getting into the topic, we should know about text flags:

Text flags are modifiers appended to the boundary text after a forward slash.

Some of the commonly known and used Text flags are:

  • /IC to ignore the case
  • /BIN to specify binary data
  • /DIG to interpret the pound sign (#) as a wildcard for a single digit
  • /ALNUM<case> to interpret the caret sign (^) as a wildcard for a single US-ASCII alphanumeric character

Case 1: Digit Value

Suppose the string to capture is a fixed literal, but the left boundary changes every time; the left boundary appears as axb, where x ranges from 0 to 9, as follows:
a0b=Boundaryrb
a1b=Boundaryrb
a2b=Boundaryrb
——–
——–

a9b=Boundaryrb

We can capture the desired string by putting the following correlation function in place, using the /DIG text flag on the left boundary:

web_reg_save_param ("Corr_Param", "LB/DIG=a#b\=", "RB=rb", LAST);

In the left boundary, the position that you expect to be dynamically filled with a digit is replaced by a pound sign (#).

If the changing portion contains more than one digit, use one pound sign per digit (for example, ‘##’ for two digits).

Case 2: Boundary is a string whose case varies (ignore case)

web_reg_save_param ("Corr_Param", "LB/IC/DIG=a#b\=", "RB/IC=rb", LAST);

Case 3: A place filled by either a digit or a letter

web_reg_save_param ("Corr_Param", "LB/ALNUM=a^b\=", "RB/IC=rb", LAST);

HP Ajax TruClient – Overview with Tips and Tricks

Overview

  • In LoadRunner 11.5, TruClient for Internet Explorer has been introduced. It is now possible to use TruClient on IE-only web applications.

Note: This still supports only HTML + JavaScript websites. It does not support ActiveX objects or Flash or Java Applets, etc.

  • TruClient IE was developed as an add-in for IE 9, so it will not work on earlier versions of IE. IE 9 was the first version to expose enough of the DOM to be usable by TruClient-style Vusers. Note that your web application must support IE 9 in “standards mode”.
  • Some features have also been added to TruClient Firefox. These include:
    • The ability to specify think time
    • The ability to set HTTP headers
    • URL filters
    • Event handlers, which can automatically handle intermittent pop-up windows, etc.
  • Web page breakdown graphs have been added to TruClient (visible in LoadRunner Analysis). Previously they were only available for standard web Vusers.

Tips and Tricks

NTLM authentication -

Scenario: Some applications, when accessed in Firefox, demand NTLM authentication. These authentication steps are not captured during recording, so during replay, because the steps are absent, the application fails to perform the intended transactions.

Solution: To handle an application that asks for NTLM authentication while recording and replaying, you have to specify the application as a trusted NTLM resource. To do that, follow these steps; a sample user.js entry is shown below.

  • Open the file “user.js” located in “%lr_path%\dat\LrWeb2MasterProfile”.
  • Locate the preference setting “network.automatic-ntlm-auth.trusted-uris”.
  • Specify the URL of the trusted resource as the value of this setting.
  • Save the file “user.js”

These changes need to be made only on the machine where VuGen is used to develop the script. They are saved with the script and apply on other machines during load tests.
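For illustration, a minimal sketch of what such an entry in user.js might look like (the URL is purely illustrative; use your application's own address, and multiple sites can be listed comma-separated):

user_pref("network.automatic-ntlm-auth.trusted-uris", "http://myapp.example.com");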

Disable pop-ups during recording -

Scenario: The occurrence of unwanted pop-ups creates hurdles during script development.

Solution: Pop-ups can be disabled by following the steps below:

  • In the Firefox address bar, enter ‘about:config’ and click the ‘I’ll be careful, I promise!’ button
  • In the filter field, enter disable_open_during_load
  • Right-click on ‘disable_open_during_load’ and select ‘Toggle’. The value changes to ‘false’
  • Record initial Navigation step again
  • Your pop-ups will be disabled

Displaying the value in a parameter or variable -

Scenario: You want to see the value that gets stored in a parameter or variable while replaying the script.

Solution: This can be achieved using the alert() function.

Example:

var x = "Good Morning";

window.alert(x);
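If the value lives in a LoadRunner parameter rather than a local JavaScript variable, the same idea applies using TruClient's LR object. A minimal sketch, assuming a parameter named Corr_Param already exists in the script (the name is illustrative):

// Display the parameter's current value during replay
window.alert(LR.getParam("Corr_Param"));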

Calculating number of text occurrences -

Scenario: Most modern internet applications have dynamic features that create this requirement, whether it is checking for the presence of a text on the web page or counting the number of tickets generated in the application at run time. In both cases we need to calculate the number of text occurrences and use that count in the right logical code.

Solution: In Ajax TruClient, we can achieve this using JavaScript functions, as follows (a complete sketch appears after these steps):

  • Drag ‘Evaluate JavaScript code’ from the toolbox
  • In the arguments section, add the following code:
    var splitBySearchWord = (document.body.textContent).split('Text to search for');
  • Then display the number of occurrences of the text using the alert() method; note that split() returns one more piece than the number of matches:
    window.alert(splitBySearchWord.length - 1);
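Putting the pieces together, here is a minimal sketch; the search text is illustrative:

// Count occurrences of a sample search term in the page body
var searchWord = 'Ticket created';
var occurrences = document.body.textContent.split(searchWord).length - 1;
window.alert(occurrences + ' occurrence(s) found');
// The count can then drive further script logic, e.g. flag the page when the text is missing
if (occurrences === 0) {
    window.alert(searchWord + ' was not found on the page');
}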

 

 

Inserting random think time -

Scenario: End-user behavior is unpredictable, and as performance testers our aim should always be to get as close as possible to the real-world scenario. Some end users may spend only 2 seconds before navigating to the next page, while others may think for longer. Hence, in many test scenarios it is not ideal to insert a fixed think time before a web request; instead, a random think time should be used.

Solution: The above scenario can be achieved using a little JavaScript:

  • From the ‘Toolbox’, copy a wait function and paste it before the web request
  • In the argument section, replace the interval value ‘3’ with ‘Math.floor(11*Math.random() + 5);’

The above expression returns a random integer between 5 and 15.

Math.random() returns a random number between 0 and 1, and Math.floor() rounds a number down to the nearest integer (for example, Math.floor(1.8) is 1). Multiplying by 11 spreads the random value over the range 0 to just under 11, so after adding 5 and rounding down, the result is a whole number of seconds from 5 to 15.
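If you prefer a reusable form, here is a small sketch; the helper name and the 5 to 15 second range are illustrative:

// Returns a random whole number of seconds between min and max (inclusive)
function randomThinkTime(min, max) {
    return Math.floor((max - min + 1) * Math.random()) + min;
}
randomThinkTime(5, 15);   // use as the interval argument of the wait step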

Handling browser cache -

Scenario: You may wish to manage the cache handling features of the browser to replicate different types of test scenarios.

Solution: This can be achieved by following these steps -

  • Open the script in Interactive mode
  • Go to VUser > Run-Time Settings > General > Load mode Browser Settings
  • Inside the Settings frame, expand the Advanced option
  • Select the option “Compare the page in cache to the page on the network” and choose one of the four values below according to your test requirements

0 = Once per session

1 = Every time the page is accessed

2 = Never

3 = When the page is out of date (Default value)

Conclusion

At Hexaware, we have used the TruClient protocol to record many applications for different clients. Some of the benefits we have reaped are as follows. HP TruClient works with many frameworks such as jQuery, Ajax, YUI, GWT and plain JavaScript, so rich internet applications built on Web 2.0 technologies can be easily scripted and replayed. Script development is interactive, with the script flow on one side of the window and the application open in the browser on the other; this makes scripting with the Ajax TruClient protocol easier and faster. Object identification features minimize the use of complex correlations and make the script more dynamic, so scripts become more resilient to back-end changes. Complex client-side events such as mouse-overs, slider bars, calendar items and dynamic lists can be scripted, customized and replayed very easily. As a result, the testing cycle is much shorter with Ajax TruClient than with other web protocols. Finally, with Ajax TruClient the measured response time includes both API and GUI time, as opposed to other protocols that provide only API response time.

 

XML Optimization through custom Properties

1. Problem Statement:

I am creating an XML file as output. If my source is empty, is there a way to avoid the creation of an empty XML file?

Sample output data with source data:


 

Case 1: Empty Source – Creation of a Minimal XML File

We have to set the following properties of an XML Target at session level under the Mapping tab.

Null Content Representation – “No Tag”

Empty String Content Representation – “No Tag”

Null Attribute Representation – “No Attribute”

Empty String Attribute Representation – “No attribute”

The Output file is as follows

Note: This generates a minimal XML file containing only the parent tags. The parent tags are shown as unary tags in the browser.

Case 2: Creation of a Zero-Byte XML File

Even after setting all the above properties, you will still get an XML file with no data, or one containing only parent tags. If a downstream system such as MFT (Managed File Transfer) consumes this empty file, you will end up with errors during processing. To avoid these errors, we have to set two custom properties on the Integration Service:

WriteNullXMLFile = No

The WriteNullXMLFile custom property skips creating an XML file when the XML Generator transformation or the XML target does not receive any data. The default value is Yes; if you set it to No, the minimal XML document is not generated and the target XML file will be zero bytes.

 

2) Suppress the Empty Parent Tag

 

A PowerCenter session with an XML target writes empty parent tags to the XML file when all child elements are null.  This may occur even when the Null Content Representation option is set to No Tag in the session properties.

SuppressNilContentMethod = ByTree

The SuppressNilContentMethod server parameter will suppress the parent tags as well as the child tags when all the child elements are null. To achieve this, set the custom property to “ByTree”.

 

 

ByTree

The ByTree flag suppresses non-leaf elements up to (but not including) the document root when the entire element chain originating at the specified element contains no data. The ByTree setting is generally the optimal choice.

For example, if the Street1 and Street2 values are empty, then without setting the property you will get the output below, with a unary Street tag:

If you set the property SuppressNilContentMethod=ByTree, the entire Street tag disappears.

3) To reduce the session log size when using an XML target

XMLWarnDupRows=No

By default this property is Yes, which means the Informatica server writes duplicate-row warnings, and the duplicate rows themselves, for XML targets to the session log.

4) To reduce the cache file size created by an XML target and improve the performance of reading large XML files

XMLSendChildFirst=Yes

How to set the Custom Properties?

Infa 8.x and Above

1. Connect to the Administration Console

2. Stop the Integration Service

3. Select the Integration Service

4. Under the Properties tab, click Edit in the Custom Properties section

5. Under Name enter WriteNullXMLFile

6. Under Value enter No

7. Under Name enter SuppressNilContentMethod

8. Under Value enter ByTree

9. Click OK

10. Restart the Integration Service

Starting with PowerCenter 8.5, this change can also be made at the session task itself, as follows:

These session-level custom properties override the corresponding Integration Service level properties.

1. Edit the session

2. Select Config Object tab

3. Under Custom Properties, add the attributes WriteNullXMLFile=No and SuppressNilContentMethod=ByTree

4. Save the session

Session Properties:

Advanced Replication Setup for High Availability and Performance

In my personal opinion, Oracle leads the market in directory product offerings (LDAP directories). From Oracle Internet Directory (OID) to the latest Oracle Unified Directory (OUD), Oracle provides a variety of LDAP directory products for integration.

With the increasing demand for mobile and cloud computing, there is a need to standardize LDAP deployments for Identification, Authentication and (sometimes) Authorization (IAA) services. With a highly scalable, high-performing, highly available, stable and secure LDAP directory, these IAA services become easier to integrate with applications in the cloud or with mobile applications.

Introduction

Oracle Unified Directory (OUD) is the latest LDAP directory offering from Oracle Corp. As mentioned in my previous post, OUD comes with three main components:

  • Directory Server
  • Proxy Server
  • Replication Server

Here, Directory Server provides the main LDAP functionality (I assume you already know what an LDAP directory server is). Proxy Server is used to proxy LDAP requests. And Replication Server is used for replicating (copying) data from one OUD to another OUD, or even to an ODSEE server (we will talk more about replication in this post). You can read my first post on OUD here. In this article, I will write about the Replication Server and an advanced replication setup for Oracle Unified Directory.

Many people want a step-by-step guide (a kind of cheat sheet) to set up something like OUD or OID for replication. Unfortunately, I am not going to give you that here. In my personal opinion, a cheat sheet is not the right approach and will not be helpful in the long run for gaining concepts or knowledge. We first need to give importance to the basic concepts behind how something works.

First of all, read OUD Documentation

The product documentation must be read before you plan your deployment. You can find the OUD documentation here; this link is for OUD Version 11.1.1, so make sure to refer to the latest product manual. The documentation provides a lot of detail about the product and saves a lot of investigation time later. For replication, start with the “Architecture Reference” guide.

When do you want to set up replication?

There should be a reason, right? If there is no reason, then there is no need for you to set up replication at all. Instead, you can have a beer and pass the time happily doing something else.

Ideally, you need a replication setup for “High Availability” and “Performance”. Usually there will be multiple OUD Directory Server instances running in production. Let’s say we need around four OUD Directory Servers (and four more for Business Continuity/Disaster Recovery).

Unfortunately, there is no single process that updates all eight OUD Directory Servers in our example. We need a mechanism to synchronize the directory entries across these servers. For this, we use the OUD Replication Server component.

Securing the Replication Traffic

We don’t want network sniffers to walk away with critical user information (this is possible even inside the internal network). We need to encrypt the traffic between the replication servers; do not consider setting up replication server communication without encrypted traffic.

Since OUD serves identity data, all its network traffic is prone to sniffing attacks. Always use encrypted, secure connections to OUD, or to any LDAP directory.

Deciding a Replication Method to use

The next important thing is to decide which replication method you are going to use. This is mostly site-specific, and you need to know a lot of details before deciding on a replication method. I am going to use the following sample architecture for this post. Let’s understand our sample OUD architecture first.

 

Here are the key components of the architecture:

  • We have one master OUD server called PROD-01. All updates to the directory happen here; most probably the HR system will update the directory. Updates can also be made through a custom-developed application plug-in for the LDAP directory or through an Identity and Access Management (IAM) system such as Oracle Identity Manager or Tivoli Identity Manager.
  • PROD-02 will be used alongside PROD-01 for High Availability and Performance in the production deployment.
  • In the Disaster Recovery deployment, we have the PROD-03 and PROD-04 servers. These servers need to synchronize the user data from the master server PROD-01.

One way to set up replication is to provision users into every OUD Directory Server through an Identity and Access Management (IAM) system (such as Oracle Identity Manager or Tivoli Identity Manager). However, this provisioning can be time consuming, because it amounts to updating several different LDAP directories. A better way to achieve this is to use a Replication Server.

We will continue setting up the Replication Server for this architecture in another post. Until then!

Transitioning to a New World – An Analytical Perspective

Recently, I had the opportunity to speak at the Silicon India Business Intelligence Conference. The topic I chose for the discussion was the BI & Analytics perspective for companies transitioning to a new world. You can view my presentation at this link: http://bit.ly/VLDDfF

The gist of my presentation is given below:

1) First, I established the fact that the world is indeed changing by presenting some statistics:

  • Data Deluge: The amount of digital data created in the world currently stands at around 7 zettabytes per annum (1 zettabyte = 1 billion terabytes)
  • Social Media: Facebook has touched 1 billion users, which would make it the 3rd largest country in the world
  • Cloud: Tremendous amount of cloud infrastructure is being created
  • Mobility: There are 4.7 billion mobile subscribers, covering 65% of the world’s population

2) Enterprises face a very different marketplace due to the profound changes taking place in the way people buy, sell, interact with one another, spend their leisure time, etc.

3) To ensure that BI can help business navigate the new normal, there are 3 key focus areas:

  • Remove Bottlenecks – Give business what they want
  • Enhance Intelligence
  • End to End Visibility by strengthening the fundamentals

For each of the 3 areas mentioned above, I gave some specific examples of the trends in the BI space.

1) For removing bottlenecks, the impact of in-memory and columnar databases was elaborated.

2) For enhancing intelligence, working with unstructured data and using big data techniques were discussed.

3) For the 3rd point, the focus was on strengthening the fundamentals of the BI landscape.

Please do check out my complete presentation at http://bit.ly/VLDDfF and let me know your views.

Thanks for reading.

Tuesday, 16 October 2012

Collaborative Data Management – Need of the hour!

Well, the topic may seem like a pretty old concept, yet it remains a vital one in the age of Big Data, Mobile BI and the Hadoops! As per the FIMA 2012 benchmark report, Data Quality (DQ) still remains the topmost priority in data management strategy:

‘What gets measured improves!’ But a Data Quality (DQ) initiative is often a reactive strategy as opposed to a pro-active one; consider the impact bad data could have in a financial reporting scenario – a tarnished brand, loss of investor confidence.

But are business users aware of DQ issues? A research report by The Data Warehousing Institute suggested that more than 80% of the business managers surveyed believed that their business data was fine, but only about half of their technical counterparts agreed! Having recognized this disparity, it is a good idea to map the dimensions of data quality to the business problems created by its absence.

Data Quality Dimensions – IT Perspective

 

  • Data Accuracy – the degree to which data reflects the real world
  • Data Completeness – inclusion of all relevant attributes of the data
  • Data Consistency – uniformity of data across the enterprise
  • Data Timeliness – is the data up to date?
  • Data Auditability – is the data reliable?

 

Business Problems – Due to Lack of Data Quality

  • Human Resources – Challenge: the actual employee performance as reviewed by the manager is not in sync with the HR database; inaccurate employee classification against government classification groups (minorities, differently abled). Dimensions: data consistency, accuracy
  • Marketing – Challenge: print and mailing costs associated with sending duplicate copies of promotional messages to the same customer/prospect, or sending them to the wrong address/email. Dimension: data timeliness
  • Customer Service – Challenge: extra call-support minutes due to incomplete customer data and poorly defined metadata for the knowledge base. Dimension: data completeness
  • Sales – Challenge: lost sales due to the lack of proper customer purchase/contact information, which prevents the organization from performing behavioral analytics. Dimensions: data consistency, timeliness
  • ‘C’ Level – Challenge: reports that drive top-management decision making are not in sync with the actual operational data; difficulty getting a 360° view of the enterprise. Dimension: data consistency
  • Cross Functional – Challenge: sales and financial reports are not in sync with each other – typically data silos. Dimensions: data consistency, auditability
  • Procurement – Challenge: the procurement levels of commodities differ from the requirements of production, resulting in excess or insufficient inventory. Dimensions: data consistency, accuracy
  • Sales Channel – Challenge: the same product is represented differently across e-commerce sites, kiosks and stores, and the product names/codes in these channels differ from those in the warehouse system, resulting in delays or wrong items being shipped to the customer. Dimensions: data consistency, accuracy

*Just a perspective; other dimensions could cause these issues too

As is evident, data quality is not just an IT issue but a business issue too, and it requires a ‘Collaborative Data Management’ approach (involving both business and IT) to ensure quality data. The solution is multifold, spanning the planning, execution and sustenance of a data quality strategy. Aspects such as data profiling, MDM and data governance are vital guards that help analyze data, provide first-hand information on its quality, and maintain that quality on an ongoing basis.

Collaborative Data Management – Approach

Key steps in Collaborative Data Management would be to:

  • Define and measure metrics for data together with the business team
  • Assess the existing data against those metrics – carry out a profiling exercise with the IT team
  • Implement data quality measures as a joint team
  • Enforce a data quality firewall (MDM) as a governance process, to ensure that only correct data enters the information ecosystem
  • Institute data governance and stewardship programs to make data quality a routine, stable practice at a strategic level

This approach ensures that the data ecosystem within a company is kept clean, as it involves business and IT users from every department at all levels of the hierarchy.

Thanks for reading, would appreciate your thoughts.