Showing posts with label Software Quality Assurance. Show all posts

Tuesday, 31 January 2012

CALL WAITING….. ??? – Costly Miss Costly Fix


On January 15, 1990, around 60,000 AT&T long-distance customers tried to place calls as usual and got nothing. Behind the scenes, the company’s 4ESS long-distance switches, all 114 of them, kept rebooting in sequence. AT&T assumed it was being hacked, and for nine hours the company and law enforcement tried to work out what was happening. In the end, AT&T uncovered the culprit: an obscure fault in its own recently updated switching software.

Here’s how the switches were supposed to work: if one switch becomes congested, it sends a “do not disturb” message to the next switch, which picks up its traffic. The second switch performs a brief reset to keep from disturbing the first. It then checks back on the first switch, and if it detects activity, it does another brief reset to register that the first switch is back online. So far, so simple.

The month before the crash, AT&T tweaked the code to speed up the process. The trouble was, things became too fast. The first switch to overload sent two messages, one of which hit the second switch just as it was resetting. The second switch assumed that there was a fault in its CCS7 internal logic and reset itself. It put up its own “do not disturb” sign and passed the problem on to a third switch.

The third switch was also overwhelmed and reset itself, and so the problem cascaded through the whole network. All 114 switches kept resetting themselves until engineers reduced the message load on the system and the wave of resets finally broke.
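
At its core, this failure pattern is a race condition: a status message arriving while a switch is still mid-reset is misread as an internal fault. The short Python sketch below simulates that misinterpretation. It is an illustration only, not the actual 4ESS code, and the class and message names are invented.

class Switch:
    """Toy model of a long-distance switch; invented for illustration only."""

    def __init__(self, name):
        self.name = name
        self.resetting = False

    def handle_status_message(self):
        # Flawed logic: a message that arrives while the switch is still
        # resetting is treated as evidence of an internal fault, so the
        # switch resets again and warns its own neighbors.
        if self.resetting:
            print(f"{self.name}: message arrived mid-reset, assuming internal fault")
            return "do_not_disturb"
        # Normal case: note that the neighbor is back and do a brief reset.
        self.resetting = True
        print(f"{self.name}: neighbor is back online, doing a short reset")
        return "ok"

# Two closely spaced messages from an overloaded neighbor trigger the cascade:
switch2 = Switch("Switch 2")
switch2.handle_status_message()   # first message starts the brief reset
switch2.handle_status_message()   # second arrives mid-reset and is misread as a fault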

In the meantime, AT&T lost an estimated $60 million in long-distance charges from calls that didn’t go through. The company took a further financial hit a few weeks later when it knocked a third off its regular long-distance rates on Valentine’s Day to make amends with customers.

Wednesday, 18 January 2012

Test Consulting: How to Improve a Quality Assurance Area


Is your client having difficulty measuring QA performance? Is your client undecided about which test strategy to follow for a new application implementation? Is your client looking for testing tool recommendations, or does it need to improve the usage of the existing ones?

Hexaware has a dedicated Strategic Consulting practice in which testing experts provide test consulting services to clients. During consulting engagements, Hexaware assesses and evaluates the Approach, People and Technology and fills in the gaps by bringing in our domain experience, best practices, frameworks, tools experience, etc. The consulting services also include tool selection, tool optimization, and TCoE creation and/or optimization.

If you want to learn more about test strategy, the following information will help you execute a test strategy engagement.

Test Strategy Approach
The first step is to identify the problem(s) the client is facing and define the strategy objectives. After that, I recommend following this approach to execute a test strategy project:

Assessment Areas
As part of the information-gathering phase, we leverage Hexaware’s proprietary APT™ (Approach, People, and Technology) methodology to focus on the right areas and meet the strategy objectives. The APT™ methodology is the foundation of all of our QA service offerings; below is a description of each component:
• The Approach component lays the foundation for the processes that each client uses as part of testing.
• The People component covers the part of the IT organization focused on testing; Hexaware analyzes the groups, roles and responsibilities involved in QA and testing.
• The Technology component covers the use of QA and test automation tools to optimize technology, improve efficiency and lower costs.


At the end of the project, we provide the client with the following components:
• Current State: An analysis of the current state with regard to the testing objectives
• Gaps: The gaps found between the current state and both best practices and the desired state
• Recommendations: Our recommendations to close the gaps and meet the organization’s objectives
• Implementation Road Map: A recommended path to follow in order to implement the recommendations.
As a result of the analysis phase, we show our clients the current state in a quantitative graph. This graph evaluates all relevant aspects of an IT organization and prioritizes each category according to the testing objectives. One example of this graph is shown below.



This was an assessment provided as part of a test strategy we created for a leading bank in Mexico for a T24 product implementation. The targeted benefits of this strategy were to reduce testing cycles by 30%, automate at least 50% of the manual test cases and achieve a defect-free implementation using robust and repeatable testing processes.
Other examples of metrics commonly used as part of the strategy objectives are increasing automation coverage by 30%, increasing productivity by 25%, reducing overall testing cost by 15%, and so on.

A test consulting practice is an area full of innovation, industry best practices and shared experiences. Now, with the Hexaware blogs, all of us can formally share our experiences, and our colleagues can leverage them for future assignments.

To know More: Visit Quality Assurance Area 

Tuesday, 17 January 2012

Good Attributes to Keep in Mind When Designing Automated Test Cases


Programming remains the biggest and most critical component of test case automation. Hence, the design and coding of test cases are extremely important if their execution and maintenance are to be effective.

Some fundamental attributes of good automated test cases are:
  • Simple
  • Modular
  • Robust
  • Reusable
  • Maintainable
  • Documented
  • Independent
1) Simplicity: A test case should have a single objective. Multi-objective test cases are difficult to understand and design, and multipurpose test cases are likely to break or give misleading results. As a rule of thumb, a test case should not have more than 10 to 15 test steps; the count may grow depending on the process being tested, but 10 to 15 steps usually keep a test case clear. If the execution of a complex test leads to a system failure, it is difficult to isolate the cause of the failure.
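
To make the single-objective rule concrete, here is a minimal sketch using Python and pytest. The post does not prescribe a tool, so the framework choice and the discount() function are illustrative assumptions only.

# Hypothetical example: one objective per automated test case.
# discount() is an invented function standing in for the system under test.
def discount(price, percent):
    return round(price * (1 - percent / 100), 2)

def test_discount_applies_percentage():
    # Single objective: verify only the discount calculation.
    assert discount(200.0, 10) == 180.0

def test_zero_discount_keeps_price():
    # A separate objective gets its own test case instead of being bolted on above.
    assert discount(200.0, 0) == 200.0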

2) Modularity: Each test case should have a setup phase before and a cleanup phase after the execution of its test steps. The setup phase ensures that the initial conditions are met before the test steps start; the cleanup phase puts the system back in the initial state, that is, the state prior to setup. Each test step should be small and precise. Not every test case needs a full setup and cleanup process; this varies from case to case. The test steps themselves are building blocks from reusable libraries that are put together to form multi-step test cases.
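
A minimal sketch of the setup and cleanup phases, again assuming pytest; the in-memory state dictionary stands in for whatever environment preparation a real suite would need.

import pytest

@pytest.fixture
def prepared_environment():
    state = {"records": []}     # setup: establish the initial conditions
    yield state                 # the test steps run at this point
    state["records"].clear()    # cleanup: return the system to its prior state

def test_record_is_stored(prepared_environment):
    # Test steps kept small and precise, bracketed by setup and cleanup.
    prepared_environment["records"].append("order-1")
    assert prepared_environment["records"] == ["order-1"]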

3) Robustness and Reliability: A test case verdict (pass or fail) should be assigned in a way that is unambiguous and understandable. Robust test cases can ignore trivial failures, such as a one-pixel mismatch in a graphical display, and care should be taken to minimize false test results. Test cases must also have built-in mechanisms to detect and recover from errors. For example, a test case need not wait indefinitely if the software under test has crashed; it can wait for a while and terminate an indefinite wait by using a timer mechanism.
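
One way to implement the timer mechanism mentioned above is a bounded polling wait, sketched here in plain Python; is_application_responsive() is a placeholder for a real health check.

import time

def wait_until(condition, timeout_seconds=30.0, poll_interval=0.5):
    # Poll the condition until it holds or the deadline passes,
    # so the test case never hangs indefinitely.
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    return False    # unambiguous verdict instead of an endless wait

def test_application_recovers():
    def is_application_responsive():
        return True     # placeholder check, for illustration only
    assert wait_until(is_application_responsive, timeout_seconds=10.0)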

4) Reusability: Test steps are built to be configurable; that is, variables should not be hard coded. They can take their values from a single configurable file or data table. Attention should be given while coding test steps to ensure that a single, global set of configuration values is used instead of multiple, decentralized, hard-coded variables. Test steps are made as independent of the test environment as possible. Automated test cases are categorized into groups so that subsets of test steps and test cases can be extracted and reused for other platforms and/or configurations. Finally, in GUI automation, hard-coded screen locations must be avoided.
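
A small data-driven sketch of this idea, assuming pytest. In a real suite the table would live in a separate CSV or data-table file, but an in-memory string keeps the example self-contained, and lookup_role() is an invented stand-in for the application under test.

import csv
import io
import pytest

# The data table: values are configured here rather than hard coded in each step.
TEST_DATA = io.StringIO("username,expected_role\nalice,admin\nbob,viewer\n")
ROWS = list(csv.DictReader(TEST_DATA))

def lookup_role(username):
    # Stand-in for the system under test.
    return {"alice": "admin", "bob": "viewer"}.get(username)

@pytest.mark.parametrize("row", ROWS, ids=[r["username"] for r in ROWS])
def test_role_assignment(row):
    assert lookup_role(row["username"]) == row["expected_role"]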

5) Maintainability: Any change to the software under test will have an impact on the automated test cases and may require changes to the affected test cases. Therefore, an assessment of the test cases that need to be modified should be conducted before a change to the system is approved. The test suite should be organized and categorized in such a way that the affected test cases are easily identified. If a particular test case is data driven, it is recommended that the input test data be stored separately from the test case and accessed by the test procedure as needed. The test cases must comply with coding standards. Finally, all the test cases should be kept under a version control system.
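
One common way to keep affected test cases easy to identify is to tag them by the feature area they exercise. The sketch below uses pytest markers; the "billing" marker name is purely an example, and custom markers would normally be registered in pytest.ini.

import pytest

@pytest.mark.billing
def test_invoice_total_includes_tax():
    assert round(100.0 * 1.16, 2) == 116.0

@pytest.mark.billing
def test_invoice_total_without_tax():
    assert round(100.0 * 1.0, 2) == 100.0

# After a change to the billing module, only the affected category needs to run:
#   pytest -m billing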

6) Documented: Test cases and test steps must be well documented. Each test case gets a unique identifier, and its purpose must be clear and understandable. The author’s name, the date of creation, and the date of last modification must be documented. There should be a traceability matrix linking each test case to the features and requirements it checks. The situations under which the test case cannot be used should be clearly described. The environment requirements should be clearly stated, along with the source of the input test data (if applicable). Finally, the evaluation criteria for the result, that is, pass or fail, must be clearly described.
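
A minimal sketch of keeping that documentation next to the test itself; every identifier, requirement number and date below is invented for illustration.

def test_login_locks_after_three_failures():
    """
    Test ID:        TC-AUTH-014 (hypothetical)
    Author:         QA team, created 2012-01-10, last modified 2012-01-15
    Traceability:   REQ-SEC-031, account lockout requirement
    Preconditions:  a registered user account exists
    Not applicable: single sign-on environments
    Pass criteria:  the account is locked after the third failed attempt
    """
    failed_attempts = 3     # simplified stand-in for driving three failed logins
    assert failed_attempts >= 3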

7) Independent and Self-sufficient: Each test case is designed as a cohesive entity, and test cases should be largely independent of each other. Each test case consists of test steps, which are naturally linked together. The predecessor and successor of a test step within a test case should be clearly understood.
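
Finally, a short sketch of self-sufficient test cases: each test builds the state it needs, here via the invented make_cart() helper, instead of depending on whatever a previous test left behind.

def make_cart():
    # Hypothetical helper; each test creates its own cart rather than sharing one.
    return []

def test_adding_an_item():
    cart = make_cart()
    cart.append("book")
    assert cart == ["book"]

def test_new_cart_is_empty():
    cart = make_cart()      # independent of the previous test case's cart
    assert cart == []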

Know More About: Automated Test Cases