Tuesday, 16 October 2012

Transitioning to a New World – An Analytical Perspective

Recently, I had the opportunity to speak at the Silicon India Business Intelligence Conference. The topic I chose for the discussion was focused on providing the BI & Analytics perspective for companies transitioning to a new world. 

The gist of my presentation is given below:

1)      First, I established that the world is indeed changing by sharing some statistics:

  • Data Deluge: The amount of digital data created in the world right now stands at 7 Zettabytes per annum (1 Zettabyte = 1 trillion gigabytes)
  • Social Media: Facebook has touched 1 billion users, which would make it the 3rd largest country in the world if it were one
  • Cloud: Tremendous amount of cloud infrastructure is being created
  • Mobility: There are 4.7 billion mobile subscribers, covering 65% of the world's population

2)      Enterprises face a very different marketplace due to the profound changes taking place in the way people buy, sell, interact with one another, spend their leisure time, and so on.

3)      To ensure that BI can help businesses navigate the new normal, there are 3 key focus areas:

  • Remove Bottlenecks – Give the business what it wants
  • Enhance Intelligence
  • End to End Visibility by strengthening the fundamentals

For each of the 3 areas mentioned above, I gave some specific examples of the trends in the BI space.

1)      For removing bottlenecks, I elaborated on the impact of in-memory and columnar databases.

2)      For enhancing intelligence, I discussed working with unstructured data and using big data techniques.

3)      For end-to-end visibility, the focus was on strengthening the fundamentals of the BI landscape.

Please do check out my complete presentation at http://bit.ly/VLDDfF and let me know your views.

Thanks for reading.

Wednesday, 12 September 2012

Hexaware sees strong order pipeline; 20% growth: Nishar

Atul Nishar, chairman, Hexaware, says the company remains quite positive on growing at 20% or more. He feels that if the situation improves with the US elections and there is no debacle in Europe, the environment could only improve.



He also says that there are currently five deals in the pipeline, with one in the advanced stage. The deals are spread across the United States and Europe, and across major verticals such as capital markets and travel as well as emerging verticals. In the last nine quarters, the company has signed seven large deals.


Below is the edited transcript of his interview to CNBC-TV18.


Q: Hexaware recently had a deal, and there have been reports or analyst notes suggesting that the company is in conversation with potential clients for four deals, one of which is in advanced stages. Do you think something could fructify in the near term?


A: Currently, there are five deals in the pipeline and one is in the advanced stage. The deals are spread across the United States and Europe, and across major verticals such as capital markets and travel as well as emerging verticals. In the last nine quarters we have signed seven large deals.


Q: Are billings under pressure even if the deals are coming? Are they coming from tight-fisted managements?


A: Over the last two years, we have marginally improved our average billing, both onsite and offshore. We don’t see any pressure on pricing in the IT industry. We have repeatedly guided that our pricing should be assumed to be stable.


The important point is that clients want value, greater performance, result-oriented projects and fixed-price or greater commitment from offshoring companies. Clients do want to cut their costs and get more value, but they also know that if it is all done at the cost of the service provider, it will not be sustainable.


Q: How much do you think Nasscom’s 13-14% growth target is under threat? Might it fall to half of that, or to high single digits?


A: Nasscom has guided for 11-14%, and that is a wide enough range. Within the industry, we saw that some mid-sized companies, as well as companies that are scale players, have done very well, so the picture is mixed. We have seen more client-specific issues, where downsizing for whatever reason may dent revenue, but that does not mean those companies will not be able to grow in future.


Q: Do you think Nasscom will hold the lower end of their 11% range?


A: That is the current level of optimism, so there is no reason to believe that there is any material change from the guided number.


Q: One concern around Hexaware for some time has been that you have seen an improvement in margins, but going forward they could come under pressure because wage hikes in Q3 are expected to shave off margins to a certain extent. How do you respond to that?


A: In Q2 (we follow the calendar year), Hexaware reported a 22.9% EBITDA margin, which was higher than in Q1. We gave the normal 10% increment to all our offshore employees. The impact was absorbed in our margin, and in spite of that, the margin improved.


We also absorbed the significant visa costs that traditionally come in that quarter. In the coming quarter there will be an increase in onsite wages. For offshore employees the date of increment is April 1, and for onsite employees it is July 1, which remains unchanged. With this, we feel we can guide stable margins.


We are proud that at Hexaware we have grown faster than the industry average, at good margins. We don’t believe in taking new deals by compromising on margins in any manner.


Q: So at this juncture you don't want to change your guidance of 20% dollar revenue growth any which way, up or down?


A: We remain quite positive on growing at 20% or more. We feel that if the situation improves with US elections and no debacle in Europe then the environment could only improve. 



Friday, 24 August 2012

Emerging DB Technology – Columnar Database


Today’s Top Data-Management Challenge:

Businesses today are challenged by the ongoing explosion of data. Gartner is predicting data growth will exceed 650% over the next five years. Organizations capture, track, analyze and store everything from mass quantities of transactional, online and mobile data, to growing amounts of machine-generated data. In fact, machine-generated data, including sources ranging from web, telecom network and call-detail records, to data from online gaming, social networks, sensors, computer logs, satellites, financial transaction feeds and more, represents the fastest-growing category of Big Data. High volume web sites can generate billions of data entries every month.

As volumes expand into the tens of terabytes and even the petabyte range, IT departments are being pushed by end users to provide enhanced analytics and reporting against these ever-increasing volumes of data. Managers need to be able to quickly understand this information, but, all too often, extracting useful intelligence can be like finding the proverbial ‘needle in the haystack’.

How do columnar databases work?

The defining concept of a column-store is that the values of a table are stored contiguously by column. Thus the classic supplier table from the suppliers-and-parts database would be stored on disk or in memory something like:

S1, S2, S3, S4, S5; 20, 10, 30, 20, 30; London, Paris, Paris, London, Athens; Smith, Jones, Blake, Clark, Adams



This is in contrast to a traditional row-store, which would store the data more like this:

S1, 20, London, Smith; S2, 10, Paris, Jones; S3, 30, Paris, Blake; S4, 20, London, Clark; S5, 30, Athens, Adams
From this simple concept flow all of the fundamental differences in performance, for better or worse, between a column-store and a row-store. For example, a column-store will excel at doing aggregations like totals and averages, but inserting a single row can be expensive, while the inverse holds true for row-stores. This should be apparent from the layouts above.
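
To make the contrast concrete, here is a minimal sketch in plain Python (no database engine; the table is the supplier example above, and all variable names are ours) of the same data held row-wise and column-wise, along with the aggregation and single-row insert discussed above.

```python
# Row-store: each record is kept together, one tuple per supplier.
row_store = [
    ("S1", 20, "London", "Smith"),
    ("S2", 10, "Paris",  "Jones"),
    ("S3", 30, "Paris",  "Blake"),
    ("S4", 20, "London", "Clark"),
    ("S5", 30, "Athens", "Adams"),
]

# Column-store: the values of each column are stored contiguously.
column_store = {
    "sno":    ["S1", "S2", "S3", "S4", "S5"],
    "status": [20, 10, 30, 20, 30],
    "city":   ["London", "Paris", "Paris", "London", "Athens"],
    "name":   ["Smith", "Jones", "Blake", "Clark", "Adams"],
}

# An aggregation touches only one contiguous column in the column-store...
avg_status = sum(column_store["status"]) / len(column_store["status"])

# ...while the row-store must walk every full record to pick out one field.
avg_status_rows = sum(status for _, status, _, _ in row_store) / len(row_store)

# Inserting one supplier is a single append in the row-store,
row_store.append(("S6", 40, "Madrid", "Evans"))
# but it touches every column list in the column-store.
for column, value in zip(column_store.values(), ("S6", 40, "Madrid", "Evans")):
    column.append(value)

print(avg_status, avg_status_rows)  # both print 22.0
```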

The Ubiquity of Thinking in Rows:

Organizing data in rows has been the standard approach for so long that it can seem like the only way to do it. An address list, a customer roster, and inventory information—you can just envision the neat row of fields and data going from left to right on your screen.

Databases such as Oracle, MS SQL Server, DB2 and MySQL are the best known row-based databases.
Row-based databases are ubiquitous because so many of our most important business systems are transactional.

Example data set: a table of 20 columns by 50 million rows.


Row-oriented databases are well suited for transactional environments, such as a call center where a customer’s entire record is required when their profile is retrieved and/or when fields are frequently updated.

Other examples include:
• Mail merging and customized emails
• Inventory transactions
• Billing and invoicing

Where row-based databases run into trouble is when they are used to handle analytic loads against large volumes of data, especially when user queries are dynamic and ad hoc.

To see why, let’s look at a database of sales transactions with 50 days of data and 1 million rows per day. Each row has 30 columns of data. So, this database has 30 columns and 50 million rows. Say you want to see how many toasters were sold in the third week of this period. A row-based database would return 7 million rows (1 million for each day of the third week) with 30 columns for each row, or 210 million data elements. That’s a lot of data elements to crunch to find out how many toasters were sold that week. As the data set increases in size, disk I/O becomes a substantial limiting factor, since a row-oriented design forces the database to retrieve all column data for any query.

As mentioned above, many companies try to solve this I/O problem by creating indices to optimize queries. This may work for routine reports (e.g., you always want to know how many toasters you sold in the third week of a reporting period), but there is a point of diminishing returns, as load speed degrades because indices need to be recreated as data is added. In addition, users are severely limited in their ability to quickly run ad hoc queries (e.g., how many toasters did we sell through our first Groupon offer? Should we do it again?) that can’t depend on indices to optimize results.


Pivoting Your Perspective: Columnar Technology

Column-oriented databases allow data to be stored column-by-column rather than row-by-row. This simple pivot in perspective—looking down rather than looking across—has profound implications for analytic speed. Column-oriented databases are better suited for analytics where, unlike transactions, only portions of each record are required. By grouping the data together this way, the database only needs to retrieve columns that are relevant to the query, greatly reducing the overall I/O.

Returning to the example in the section above, we see that a columnar database would not only eliminate 43 days of data, it would also eliminate 28 columns of data. Returning only the columns for toasters and units sold, the columnar database would return only 14 million data elements, or 93% less data. By returning so much less data, columnar databases are much faster than row-based databases when analyzing large data sets. In addition, some columnar databases (such as Infobright®) compress data at high rates because each column stores a single data type (as opposed to rows, which typically contain several data types), allowing compression to be optimized for each particular data type. Row-based databases mix multiple data types with a limitless range of values, making compression less efficient overall.
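
As a quick sanity check, the arithmetic behind the figures above (210 million elements for the row-store, 14 million for the column-store, roughly 93% less data) can be reproduced in a few lines of Python; the variable names are ours, the numbers come straight from the example.

```python
# Back-of-the-envelope check of the toaster example above.
rows_per_day = 1_000_000
days_in_week = 7
total_columns = 30
needed_columns = 2  # product and units sold

row_store_elements = rows_per_day * days_in_week * total_columns
column_store_elements = rows_per_day * days_in_week * needed_columns
reduction = 1 - column_store_elements / row_store_elements

print(row_store_elements)            # 210000000 data elements
print(column_store_elements)         # 14000000 data elements
print(f"{reduction:.0%} less data")  # 93% less data
```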

Thanks for reading this blog.

Performance Center Best Practices


For performance testing, we have started using HP Performance Center because of the many advantages it provides to the testing team. Below are some of the best practices that can be followed when using Performance Center.

Architecture – Best Practices

  • Hardware Considerations
    • CPU, Memory, Disk sized to match the role and usage levels
    • Redundancy added for growth accommodation and fault-tolerance
    • Never install multiple critical components on the same hardware
  • Network Considerations
    • Localization of all PC server traffic - Web to Database, Web to File Server, Web to Utility Server, Web to Controllers, Controller to Database, Controller to File Server, Controller to Utility Server.
    • Separation of operational and virtual user traffic – PC operational traffic should not share same network resources as virtual user traffic – for optimal network performance.
  • Backup and Recovery Considerations
    • Take periodic backups of the Oracle database and the file system (\\<fileserver>\LRFS)
    • Backups of PC servers and hosts are optional
  • Monitoring Considerations
    • Monitoring services (e.g., SiteScope) should be employed to manage the availability and responsiveness of each PC component

Configuration – Best Practice

  • Set the ASP upload buffer to the maximum size of a file that you will permit to be uploaded to the server (see the sketch after this list)
    • Registry key: HKLM\SYSTEM\CurrentControlSet\Services\w3svc\Parameters
    • Modify MaxClientRequestBuffer (create it as a DWORD if it does not exist)
    • e.g., 2097152 is 2 MB
  • Limit access to the PC File System (LRFS) for security
    • Performance Center User (IUSR_METRO) needs “Full Control”
  • We recommend 2 LoadTest Web Servers when
    • Running 3 or more concurrent runs
    • Having 10 plus users viewing tests
  • Load balancing requires an external, web-session-based load balancer
  • In Internet Explorer, set “Check for newer versions of stored pages” to “Every visit to the page”
    • NOTE: This should be done on the client machines that are accessing the Performance Center web sites
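
The ASP upload buffer change described above can be scripted rather than edited by hand. Below is a minimal, hypothetical Python sketch using the standard winreg module; it assumes it is run on the Performance Center web server with administrator rights, and the 2 MB figure is simply the example value from the list above.

```python
import winreg

# Registry location and value named in the best practice above.
KEY_PATH = r"SYSTEM\CurrentControlSet\Services\w3svc\Parameters"
VALUE_NAME = "MaxClientRequestBuffer"
BUFFER_BYTES = 2097152  # 2 MB upload limit (example value)

# Open (or create) the key under HKLM and write the DWORD value;
# this creates MaxClientRequestBuffer if it does not already exist.
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, BUFFER_BYTES)

print(f"{VALUE_NAME} set to {BUFFER_BYTES} bytes")
```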

Script Repository – Best Practice

  • Use VuGen integration for direct script upload
  • Ensure dependent files are included within the zip file
  • Re-configure the script with optimal run-time settings (RTS)
  • Validate script execution on PC load generators
  • Establish a meaningful script naming convention (see the sketch after this list)
  • Clean up the script repository regularly
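
As an illustration of the naming-convention point above, here is a small, hypothetical Python sketch that flags script archives in a local staging folder that do not match an assumed <app>_<scenario>_v<version>.zip pattern. The pattern and folder name are examples of our own, not Performance Center requirements.

```python
import re
from pathlib import Path

# Assumed convention: <app>_<scenario>_v<two-digit version>.zip,
# e.g. "billing_checkout_v03.zip".
NAME_PATTERN = re.compile(r"^[a-z0-9]+_[a-z0-9]+_v\d{2}\.zip$")

def check_scripts(folder: str) -> list[str]:
    """Return the script archives in `folder` that break the convention."""
    return [p.name for p in Path(folder).glob("*.zip")
            if not NAME_PATTERN.match(p.name)]

if __name__ == "__main__":
    # "./vugen_scripts" is an assumed local staging folder for VuGen scripts.
    for name in check_scripts("./vugen_scripts"):
        print("Rename before upload:", name)
```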

Monitor Profile – Best Practice

  • Avoid information overload
    • Min-Max principle – Minimum metrics for maximum detection
  • Consult performance experts and developers for relevant metrics
    • Standard Process Metrics (CPU, Available Memory, Disk Read/Write Bytes, Network Bandwidth Utilization)
    • Response Times / Durations (Avg. Execution Time)
    • Rates and Frequencies (Gets/sec, Hard Parses/sec)
    • Queue Lengths (Requests Pending)
    • Finite Resource Consumption (JVM Available Heap Size, JDBC Pool’s Active Connections)
    • Error Frequency (Errors During Script Runtime, Errors/sec)

Load Test – Best Practice

  • General
    • Create a new load test for any major change in scheduling logic or script types
    • Use versioning (by naming convention) to track changes
  • Scripts
    • When scripts are updated with new run-logic settings, remove and reinsert the updated script in the load test
  • Scheduling
    • Each ramp-up makes queries to the Licensing (Utility) Server and the LRFS file system. Do not ramp at intervals of less than 5 seconds.
    • Configure the ramp-up quantity per interval to match the available load generators (see the sketch after this list)
    • Do not run (many/any) virtual users on the Controller
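
The ramp-up guidance above is easy to sanity-check with simple arithmetic. The sketch below is illustrative only (it does not call any Performance Center API), and the vuser count, per-generator increment and interval are assumed example values.

```python
import math

total_vusers = 600       # assumed target load
load_generators = 4      # assumed number of available load generators
vusers_per_lg_step = 10  # assumed safe per-generator increment
interval_seconds = 15    # keep this at 5 seconds or more, per the guidance above

step_size = load_generators * vusers_per_lg_step  # 40 vusers per interval
steps = math.ceil(total_vusers / step_size)       # 15 intervals
ramp_minutes = steps * interval_seconds / 60      # 3.75 minutes

print(f"Ramp {step_size} vusers every {interval_seconds}s "
      f"over {steps} steps (~{ramp_minutes:.1f} min)")
```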

Timeslots – Best Practice

  • Scheduling
    • Always schedule time slots in advance of load test
    • Always schedule extra time (10-30 minutes) for large or critical load tests
    • Allow for gaps between scheduled test runs (in case of emergencies)
  • Host Selection
    • Use automatic host selection whenever possible
    • Reserve manual hosts only when specific hosts are needed (because of runtime configuration requirements)

Following the practices above will help you use Performance Center without issues and will also save you a lot of time by avoiding problems that can arise when these practices are skipped.

Thanks for reading.

Wednesday, 22 August 2012

Job: Peoplesoft Tester In Chennai

Title: Peoplesoft Tester
Categories: India
Grade: G4
Skill: Peoplesoft, HRMS Testing, Payroll
Start Date: 21-08-2012
Location: Chennai

Job Information

  • 3-5 years of experience in ERP-related product testing
  • Knowledge of the complete testing life cycle and different testing methodologies
  • Min. 2-3 years of hands-on experience with PeopleSoft HRMS
  • Min. 1 year of experience writing test scripts for the PS Payroll module
  • Good knowledge of HP QC
  • Strong analytical and troubleshooting skills

Unit: 10

Apply Now