
Friday, 24 August 2012

Emerging DB Technology – Columnar Database


Today’s Top Data-Management Challenge:

Businesses today are challenged by the ongoing explosion of data. Gartner is predicting data growth will exceed 650% over the next five years. Organizations capture, track, analyze and store everything from mass quantities of transactional, online and mobile data, to growing amounts of machine-generated data. In fact, machine-generated data, including sources ranging from web, telecom network and call-detail records, to data from online gaming, social networks, sensors, computer logs, satellites, financial transaction feeds and more, represents the fastest-growing category of Big Data. High volume web sites can generate billions of data entries every month.

As volumes expand into the tens of terabytes and even the petabyte range, IT departments are being pushed by end users to provide enhanced analytics and reporting against these ever-increasing volumes of data. Managers need to be able to understand this information quickly, but, all too often, extracting useful intelligence can be like finding the proverbial 'needle in a haystack'.

How do columnar databases work?

The defining concept of a column-store is that the values of a table are stored contiguously by column. Thus the classic supplier table from the suppliers-and-parts database would be stored on disk or in memory something like this:

SNO:    S1, S2, S3, S4, S5
STATUS: 20, 10, 30, 20, 30
CITY:   London, Paris, Paris, London, Athens
SNAME:  Smith, Jones, Blake, Clark, Adams



This is in contrast to a traditional row-store which would store the data more like this:
S1, 20, London, Smith
S2, 10, Paris, Jones
S3, 30, Paris, Blake
S4, 20, London, Clark
S5, 30, Athens, Adams
From this simple concept flows all of the fundamental differences in performance, for better or worse, between a column-store and a row-store. For example, a column-store will excel at doing aggregations like totals and averages, but inserting a single row can be expensive, while the inverse holds true for row-stores. This should be apparent from the above diagram.
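To make the trade-off concrete, here is a minimal Python sketch of the supplier table stored both ways, with plain lists standing in for contiguous storage (column names taken from the classic suppliers example):

```python
# Row-store: one tuple per row; a row's fields sit together.
rows = [
    ("S1", 20, "London", "Smith"),
    ("S2", 10, "Paris",  "Jones"),
    ("S3", 30, "Paris",  "Blake"),
    ("S4", 20, "London", "Clark"),
    ("S5", 30, "Athens", "Adams"),
]

# Column-store: one contiguous list per column.
columns = {
    "sno":    ["S1", "S2", "S3", "S4", "S5"],
    "status": [20, 10, 30, 20, 30],
    "city":   ["London", "Paris", "Paris", "London", "Athens"],
    "sname":  ["Smith", "Jones", "Blake", "Clark", "Adams"],
}

# An aggregate reads exactly one contiguous list in the column-store:
avg_status = sum(columns["status"]) / len(columns["status"])   # 22.0

# The row-store must walk every row and pluck one field out of each:
avg_status_rows = sum(r[1] for r in rows) / len(rows)          # 22.0

# Inserting one row is a single append for the row-store, but one
# append per column for the column-store; the inverse trade-off.
new = ("S6", 40, "Oslo", "Nash")
rows.append(new)
for name, value in zip(("sno", "status", "city", "sname"), new):
    columns[name].append(value)
```

The same data is reachable either way; what differs is which operations touch contiguous memory.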

The Ubiquity of Thinking in Rows:

Organizing data in rows has been the standard approach for so long that it can seem like the only way to do it. An address list, a customer roster, inventory information: you can easily envision the neat rows of fields and data running from left to right on your screen.

Databases such as Oracle, MS SQL Server, DB2 and MySQL are the best known row-based databases.
Row-based databases are ubiquitous because so many of our most important business systems are transactional.
For example, consider a data set of 20 columns by 50 million rows:

Example Data Set
Row-oriented databases are well suited for transactional environments, such as a call center where a customer’s entire record is required when their profile is retrieved and/or when fields are frequently updated.

Other examples include:
• Mail merging and customized emails
• Inventory transactions
• Billing and invoicing

Where row-based databases run into trouble is when they are used to handle analytic loads against large volumes of data, especially when user queries are dynamic and ad hoc.

To see why, let's look at a database of sales transactions with 50 days of data and 1 million rows per day. Each row has 30 columns of data, so this database has 30 columns and 50 million rows. Say you want to see how many toasters were sold during the third week of this period. A row-based database would return 7 million rows (1 million for each day of the third week) with 30 columns for each row, or 210 million data elements. That is a lot of data to crunch just to find out how many toasters were sold that week. As the data set grows, disk I/O becomes a substantial limiting factor, since a row-oriented design forces the database to retrieve all column data for any query.
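A quick sanity check of that arithmetic (the `product` and `units_sold` column names are assumed for illustration; the example only says the query needs two columns):

```python
# Figures from the example: 1 million rows per day, 30 columns,
# one week (7 days) of interest out of the 50-day period.
rows_per_day = 1_000_000
days_in_week = 7
total_columns = 30

# Row-store: the engine reads every column of every matching row.
row_store_elements = rows_per_day * days_in_week * total_columns
print(row_store_elements)        # 210,000,000 data elements

# Column-store: only the columns the query touches,
# e.g. product and units_sold.
columns_touched = 2
column_store_elements = rows_per_day * days_in_week * columns_touched
print(column_store_elements)     # 14,000,000 data elements

reduction = 1 - column_store_elements / row_store_elements
print(f"{reduction:.0%} less data read")   # 93% less data read
```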

As we mentioned above, many companies try to solve this I/O problem by creating indices to optimize queries. This may work for routine reports (e.g. you always want to know how many toasters you sold in the third week of a reporting period), but there is a point of diminishing returns: load speed degrades because indices must be rebuilt as data is added. In addition, users are severely limited in their ability to run quick ad-hoc queries (e.g. how many toasters did we sell through our first Groupon offer? Should we do it again?), which cannot rely on indices to optimize results.


Pivoting Your Perspective: Columnar Technology

Column-oriented databases allow data to be stored column-by-column rather than row-by-row. This simple pivot in perspective—looking down rather than looking across—has profound implications for analytic speed. Column-oriented databases are better suited for analytics where, unlike transactions, only portions of each record are required. By grouping the data together this way, the database only needs to retrieve columns that are relevant to the query, greatly reducing the overall I/O.

Returning to the example in the section above, we see that a columnar database would not only eliminate 43 days of data, it would also eliminate 28 columns of data. Returning only the columns for toasters and units sold, the columnar database would return only 14 million data elements, or 93% less data. By returning so much less data, columnar databases are much faster than row-based databases when analyzing large data sets. In addition, some columnar databases (such as Infobright®) compress data at high rates because each column stores a single data type (as opposed to rows, which typically contain several data types), and compression can be optimized for each particular data type. Row-based storage mixes multiple data types with a nearly limitless range of values, making compression less efficient overall.
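To see why single-type columns compress so well, here is a toy run-length encoder applied to a repetitive column versus mixed row data. This is illustrative only; Infobright's actual compression is proprietary and far more sophisticated:

```python
def rle_encode(values):
    """Collapse consecutive repeats into [value, run_length] pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1        # extend the current run
        else:
            runs.append([v, 1])     # start a new run
    return runs

# A city column stored contiguously: long runs of identical values,
# so ten values collapse to three runs.
city_column = ["London"] * 4 + ["Paris"] * 3 + ["Athens"] * 3
print(rle_encode(city_column))
# [['London', 4], ['Paris', 3], ['Athens', 3]]

# Row-ordered data mixes types, so adjacent values rarely repeat:
# every value starts a new run and nothing compresses.
row_data = ["S1", 20, "London", "Smith", "S2", 10, "Paris", "Jones"]
print(len(rle_encode(row_data)))   # 8 runs for 8 values
```

Real columnar engines use stronger column-aware encodings (dictionary, delta, and so on), but the principle is the same: uniform, repetitive columns give the compressor much more to work with than interleaved rows.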

Thanks for reading this blog.

Tuesday, 6 December 2011

HP Extensibility Accelerator


The rapid adoption of Rich Internet Application (RIA) technologies and Web 2.0 innovations has significant implications both for web users and for the teams involved in the functional testing of web-enabled applications. The advent of Web 2.0 applications has created unprecedented challenges for organizations that focus on automated functional testing. Web 2.0 applications can leverage a variety of technologies on the client side and through web browsers. With Web 2.0, the client side of the application processes more scripted code and richer presentation frameworks than in traditional environments. This shift of processing to the client side challenges the capabilities of every toolset designed for the functional testing of web-enabled applications.

Testing challenges in Web 2.0:

Some of the typical challenges are as follows:
  • Web pages are dynamic and asynchronous.
  • Portions of web pages can refresh automatically to give users updates on sports scores, stock quotes, or the current activities of people they connect with on social networking sites.
  • Users have more control than ever before. Through sites such as iGoogle, users can create their own home pages that bring together information and content from across the web, such as local weather forecasts, headlines from prominent news outlets, and videos from YouTube.
  • The client side of the application processes more scripted code and richer presentation frameworks than in traditional environments.
Why HP Extensibility Accelerator?
To overcome these challenges, HP developed a new accelerator for testing Web 2.0 technologies with HP Functional Testing. The Extensibility Accelerator for HP Functional Testing provides a Visual Studio-like IDE that accelerates and simplifies the design, development and deployment of HP QuickTest Professional Add-in Extensibility support sets.

These support sets extend the HP Functional Testing Web Add-in so you can test Web controls that are not supported out-of-the-box.

Evolution of HP Extensibility Accelerator:

Extensibility is enhanced and accelerated with the new HP Extensibility Accelerator for Functional Testing software, which provides an environment that speeds the development of Web Add-in Extensibility toolkit support sets.

What is HP Extensibility Accelerator?

HP Extensibility Accelerator for Functional Testing is a separate utility that can be used on a machine with or without an installed copy of HP Functional Testing.

It provides a user interface and special tools that help us define new test object classes, map those test object classes to the controls in our application, and teach QTP how to identify the controls, perform operations on the controls and retrieve their properties.

Features of HP Extensibility Accelerator:
  • Create and define test object classes using JavaScript functions for your custom controls.
  • Use the built-in JavaScript editing and debugging tools to facilitate writing these functions.
  • Map each test object class to the controls in your application; the tool automatically identifies the rules that teach HP QuickTest Professional how to recognize the test object class in your application.
  • Simplify the creation and editing of the test object and toolkit configuration XML files through the IDE.
  • Automatically deploy your new toolkit support set to HP QuickTest Professional, or package it so you can share it with other HP QuickTest Professional users.
Conclusion:

With the HP Extensibility Accelerator for Functional Testing, we’re making it easy for our users and partners to create their own extensibility assets and extend our software to support web controls that are not supported out of the box. With the hundreds of Ajax toolkits in use today and new ones coming out each month, the HP Extensibility Accelerator provides an extremely important set of tools for your organization.

The software itself can be installed and used on a machine that does not have HP QuickTest Professional on it. Custom toolkits developed with the software can then be deployed on one or more systems that are running HP QuickTest Professional.