Wednesday, January 11, 2012

Application Simulation Models

ASM refers to the way in which the features of the application under test are simulated. In other words, it defines which end/real user actions are simulated, and how, when an application is put under test.

This simulation model can be derived in various ways depending on the requirement to test and on the application/system type, complexity, etc.

Basic simulation models follow one of two approaches:

       User session based

       Transaction based

User session based

This type of simulation revolves around how a real user will behave (in the case of vanilla versions) or has been behaving when the application is up and running in the live/production environment.

The key for this simulation is to understand the user session characteristics in the real world. Some of them are:

       User session concurrency

       User sessions ramping up pattern

       Active session actions at any given point of time (TPS)

       Avg. no. of transactions per session

       Weighting transactions as per their volumes

       Visualizing the application usage patterns

       Session and transaction patterns over a period of time (round the clock, peak week days, week ends, holidays, year ends, month ends etc.,)

Develop code snippets or programs that work on the production logs or databases to obtain the above-mentioned session characteristics and finally derive a navigation profile.
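As a minimal illustration (not from the original post), here is a Python sketch that mines a hypothetical comma-separated access log with assumed columns (timestamp, session id, transaction name) to obtain the average number of transactions per session and a volume-weighted navigation profile; adapt the parsing to whatever your production logs or database actually contain:

```python
# Minimal sketch: derive session characteristics from a production access log.
# The log layout (timestamp,session_id,transaction per line) and the file name
# are assumptions for illustration only.
from collections import defaultdict, Counter

def build_navigation_profile(log_path):
    sessions = defaultdict(list)              # session_id -> transactions seen
    with open(log_path) as log:
        for line in log:
            timestamp, session_id, transaction = line.strip().split(",")[:3]
            sessions[session_id].append(transaction)

    total_sessions = len(sessions)
    total_transactions = sum(len(txns) for txns in sessions.values())
    volumes = Counter(t for txns in sessions.values() for t in txns)

    print("Avg transactions per session:",
          round(total_transactions / total_sessions, 2))
    # Weight each transaction by its share of total volume (navigation profile)
    for transaction, count in volumes.most_common():
        print(f"{transaction}: {100.0 * count / total_transactions:.1f}%")

build_navigation_profile("access_log.csv")    # hypothetical file name
```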

     Advantages of this kind of simulation are:

       More realistic and closer to the production behavior of the application under load

       Easy and accurate prediction of potential performance issues that may arise in the near future

       Accurate like-for-like comparison between production and pre-production results

       Leads to more confident decisions while giving a ‘Go Live’ signal to the application  

Other statistical analyses (not limited to the above) can also be performed with this kind of simulation.

 

A sample below shows how we can visualize the production usage patterns.
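The original image is not reproduced here, so the sketch below shows one way such a visualization could be produced. It assumes hourly session counts have already been extracted from the production logs; the numbers and the use of matplotlib are illustrative choices, not anything prescribed by the post:

```python
# Sketch: plot hourly active-session volumes to visualize a production usage
# pattern. The hourly_sessions values are made-up placeholders.
import matplotlib.pyplot as plt

hourly_sessions = [120, 90, 60, 40, 35, 80, 240, 560, 910, 1050, 1100, 1020,
                   980, 1010, 990, 940, 870, 760, 620, 480, 390, 310, 220, 160]

plt.bar(range(24), hourly_sessions)
plt.xlabel("Hour of day")
plt.ylabel("Active user sessions")
plt.title("Production usage pattern (sample day)")
plt.show()
```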

 

 

Transaction Based Models

A transaction can be defined as an action, or a group of actions, performed by a user or by a subsystem/component of an application.

Transaction-based simulation models are used to load/stress test the system, though they do not reproduce the exact end-user usage pattern.

Transaction-based simulation can be applied to end-user-facing systems; however, it is more applicable to transaction-based subsystems, such as middle-layer integration subsystems that communicate entirely in terms of messages, otherwise called transactions.

Transaction-based simulations are easier compared to user-session-based simulation models. These simulations can follow a simple formula, as shown below.

 

Transaction rate (TPS) = N (no. of vusers) / (Think time + Pacing + Transaction response time + Code logic execution time)
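To make the formula concrete, here is a small worked example in Python; all of the numbers are illustrative assumptions, not figures from the post:

```python
# Worked example of the TPS formula above (times in seconds, values assumed).
vusers = 50              # N, number of virtual users
think_time = 3.0
pacing = 2.0
response_time = 1.5
code_logic_time = 0.5

tps = vusers / (think_time + pacing + response_time + code_logic_time)
print(f"Expected transaction rate: {tps:.1f} TPS")   # 50 / 7.0 ~ 7.1 TPS
```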

Performance Testing and Engineering

 

I would say testing means to validate a product against its requirements.

Engineering means design-develop-test-tune, i.e. continuous improvement. It is a broader task that involves strategic thinking about the system, both at the enterprise level and at a specific application/asset level, from a performance perspective. It also involves getting involved in the early stages of solution design with a performance hat on, giving performance guidance to the design so that it achieves better performance and eliminates performance hot spots, rather than moving the hotspots from one system to another.

 

So, how is performance testing done? Look below.

 

Performance Testing approach – is all about what you do, i.e. simply your approach to performance testing a product.

What does it contain?

What is the system about

What is the need for performance testing

How do we do it?

State the performance objectives at a high level. For example, if the change/program/project is a migration, then the performance objective is to achieve a better or similar outcome/behavior from the system after migration, under production volumes.

State that performance requirements will be identified, i.e. the performance category of the NFRs.

Describe how the Performance testing strategy will be laid down.

State the deliverables (PT approach and estimate, test strategy, test plan, test cases/scenarios (in QC), execution reports, DSRs, TSR, CRs)

State the resource requirements

Performance Test Architect – to prepare the strategy as said above

Performance Test Lead – to deliver the test plan and monitoring setup, assist test script development, scenario executions, analysis, and the DSRs, Execution Reports and PTSR

Performance Test Analysts – to develop the scripts, test data, scenarios, monitoring and executions

State the team structure and escalation procedure

Put the schedule in conjunction with overall testing schedule and project delivery schedule

Estimate the effort in man days

 

 

Performance Test Strategy                 

What it says is…

What is the system about from a business perspective

How the system is designed – i.e. the solution design

How it integrates with the other systems in the enterprise/organization

What are the new features introduced in the system (for this release)

What are the key/critical features of the system

Business perspective

Load/Volume perspective

What are the targeted components/systems/subsystems/resources as potential performance impacting candidates

What the production volumes are and what pattern they follow

How the load will be simulated – ASM (Application Simulation Model)

    The ASM describes production volumes/TPS/latencies/resource usage

    Critical functions – use cases

    Script steps and required data at each step, annotated with appropriate naming conventions for transactions

    Production load patterns (to identify avg, peak, soak, spike …) and usage patterns

    Load test scenarios (ramp-ups, steady states and cool-down; see the sketch after this list)

Monitoring setup

Preparing a TERP document

What the production system looks like and what the pre-production system looks like

How they differ and at what scale they are

Requirement for stubs to replace systems/subsystems/components that do not exist in pre-prod.

Monitoring requirements

Load testing tool infrastructure and connectivity

Access requirements

Data requirements i.e. the base data that needs to be present across the systems to test correctly
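Referring back to the load test scenarios item above, here is a minimal, tool-agnostic Python sketch of how a scenario's ramp-up, steady-state and cool-down phases could be described and turned into a target vuser count at any point in time; the phase durations and the peak vuser count are assumptions for illustration only:

```python
# Sketch: describe a load test scenario as ramp-up / steady-state / cool-down
# phases and compute the target vuser count at any elapsed second.
# PEAK_VUSERS and the phase durations are illustrative assumptions.
PEAK_VUSERS = 100
PHASES = [
    ("ramp_up",      600),   # reach peak load over 10 minutes
    ("steady_state", 3600),  # hold peak load for 1 hour
    ("cool_down",    300),   # ramp back down over 5 minutes
]

def target_vusers(elapsed_seconds):
    start = 0
    for name, duration in PHASES:
        if elapsed_seconds < start + duration:
            progress = (elapsed_seconds - start) / duration
            if name == "ramp_up":
                return int(PEAK_VUSERS * progress)
            if name == "steady_state":
                return PEAK_VUSERS
            return int(PEAK_VUSERS * (1 - progress))   # cool_down
        start += duration
    return 0                                           # scenario finished

for t in (0, 300, 600, 2400, 4350, 4600):
    print(f"t={t:>5}s -> {target_vusers(t)} vusers")
```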

                                   

Performance Test Plan                       

Now all the theoretical stuff is over. Need to talk about the implementation.

So it contains,

A summary/background of the project

Performance Test objective

Performance Test requirements

In scope and out of scope

This includes the systems that are included in scope for testing

The features of the application included in scope

The requirements that are kept out of scope

The kind of test scenarios kept outside of scope

Boundaries of performance testing being conducted

Assumptions

Test strategy

Risks

Constraints

Dependencies (RACI matrix)

Test execution schedule – a detailed one, i.e. down to hourly detail

Deliverables mapped with schedule

Support team contacts

Communication mechanism, i.e. which teams to contact for what, and what protocols need to be followed

What’s next

Glossary

Appendix

 

Performance Test Execution Reports

It is a best practice for every test execution report to be in an agreed format and distributed to a previously agreed team, in order to avoid confusion and deviation from the problems and the objectives/goals. Performance testing is a very serious discipline and has every right to stop the product from being delivered at the last minute. Hence there will always be pressure on the PT team to deviate from the results and findings.

 

Test results analysis and tuning

Identifying the bottlenecks by analyzing the results is a very interesting task, and an artistic one too. It is an art because this activity involves eliminating some findings from the analysis and staying focused on the critical ones, because the more data/findings there are, the more chances there are to get confused.

Analysis can start with the basic tool reports, i.e. latency, TPS, hits, connections, throughput, vuser load, resource usage and network usage, in conjunction with web page and diagnostics data (a big list: DB, JVM, threads, GC, memory, processor, profiling – transaction drill-down, method calls, classes, thread states, thread stacks, heap usage, memory analysis, …)

 

DSRs              

Identify who the stakeholders are

High level status – steady/declining/improving, followed by colors. It can sometimes be red and steady, say, schedule slippage but stability is there – like that.

Individual task wise status

Risks

Issues

Defects

 

TSRs  

What it should contain

Can Go Live – Yes/No

A summary of what this program is about and what has been done as part of performance testing

Objective(s) met?

Performance requirements met?

Key findings

Recommendations

Analysis:

Output of all the above stated analysis activities

Test execution register

Comparison reports

Latency comparison                – report statistically (90th percentile, avg, std, weighted avg) and state the % improvement or degradation, because this ultimately says how fast your system is on which features, and whether the user 'feel good' parameter is ☺, ☹ or 'no change' (see the sketch after this list)

Resource usage comparison     – a picture is always worth a thousand words, so show the improvement/degradation graphically and comparatively

System behavior comparison – e.g. show the shift in the break point graphically, as an indication of capacity
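For the latency comparison item above, this is one way the statistics and the percentage change could be computed; the two sets of samples are made-up illustrative numbers, and only the Python standard library is used:

```python
# Sketch: compare latency statistics between two runs (sample values assumed).
import statistics

def latency_stats(samples):
    ordered = sorted(samples)
    pct90 = ordered[int(0.9 * (len(ordered) - 1))]   # simple 90th percentile
    return pct90, statistics.mean(samples), statistics.stdev(samples)

baseline = [1.2, 1.4, 1.1, 1.3, 1.6, 1.5, 1.2, 1.8, 1.4, 1.3]   # e.g. last release
current  = [1.0, 1.1, 0.9, 1.2, 1.3, 1.1, 1.0, 1.5, 1.2, 1.1]   # e.g. this release

for label, (before, after) in zip(("90th pct", "avg", "std dev"),
                                  zip(latency_stats(baseline),
                                      latency_stats(current))):
    change = 100.0 * (after - before) / before
    print(f"{label}: {before:.2f}s -> {after:.2f}s ({change:+.1f}%)")
```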

                       

 

 

Then how is performance engineering done?

 

If you have followed what I said initially about performance engineering, then you must understand that performance testing is a part of the performance engineering space. For those who think in a practical sense, performance testing is bounded and supported by performance engineering activities. Hence, some of the activities stated in the performance test strategy fall under the performance engineering activity list.

The other tasks that fall under the PE space are the next steps after conducting the tests, i.e. tuning the system for optimal performance.

The other parallel tasks of PE are channelling the performance bottlenecks identified during performance testing back towards the design phase, and providing architectural guidance to the solution architects to avoid probable performance issues or bottlenecks in the future.