Wednesday, November 30, 2016

Weblogic Monitoring Sessions



Here is a sample WLST script to monitor per-application session counts on a WebLogic server. It helps show which apps create the most sessions and how frequently, and it can be extended to capture session timing details as well.

These stats can help in estimating session memory footprints, for example when sizing a JVM heap.



import datetime

adminname = 'faadmin'
adminPd = ''
url = 't3://...:'
AdminPorts = [ '' ]

def getSessionStats(urltoconnect):
    connect(adminname, adminPd, urltoconnect)
    servers = domainRuntimeService.getServerRuntimes()
    timestmp = datetime.datetime.utcnow()
    for server in servers:
        apps = server.getApplicationRuntimes()
        for app in apps:
            crs = app.getComponentRuntimes()
            for cr in crs:
                if cr.getType() == 'WebAppComponentRuntime':
                    snm = cr.getName()
                    opens = repr(cr.getOpenSessionsCurrentCount())
                    totals = repr(cr.getSessionsOpenedTotalCount())
                    highs = repr(cr.getOpenSessionsHighCount())
                    fo.write(str(timestmp) + ',' + snm + ',' + opens + ',' + totals + ',' + highs + '\n')

fo = open("filepathtoopen", "a+")
fo.write('Time, Name, OpenSessionCount, TotalSessions, SessionCountHigh\n')
for port in AdminPorts:
    getSessionStats(url + port)

fo.close()

Tuesday, October 18, 2016

JDK 8 - HOTSPOT JVM


With JDK 8 now widely adopted, here is a high-level view of HotSpot memory management and its garbage collection techniques.


Quickly, about garbage collection. It:
- allocates objects in the young generation
- promotes aged objects to the old generation
- marks old generation objects
- recovers space by removing unreachable objects or compacting live ones


Available collectors in Hotspot JVM
1. Serial collector – for single-processor machines and apps with low concurrency. Enable with -XX:+UseSerialGC.

2. Parallel collector (or throughput collector) – the default on server-class machines and suited to medium to large scale apps. Enable with -XX:+UseParallelGC. It is a multithreaded GC; select it when high throughput is needed.

3. Mostly concurrent collectors – perform most of their work concurrently, i.e. while the app is still running. Select one when faster response times are needed: since the app runs most of the time without pauses, response times are impacted less often by pause times. There are two mostly concurrent collectors – Concurrent Mark Sweep (CMS) and G1 (Garbage-First, a generational algorithm preferred for heaps larger than 4 GB).

* Throughput is the ratio of time spent in the app versus time spent in GC; the more time spent in the app, the more resources it has to deliver work.
* In generational GCs, i.e. GCs with young (eden/survivor) and tenured generations, both minor and major collections pause the app in all GC algorithms. A minor/young GC is very fast, but a major GC happens when there is no space left for tenured objects and so involves the entire heap: identifying reachable objects, marking and sweeping unreachable ones, and compacting the space to avoid fragmentation. It therefore pauses for longer. CMS and G1 try to reduce this by doing most of that work in the background while the app is running, pausing the app only at certain points and for short durations.

Tuning first steps:
Go with the defaults:
-          serial for single-core machines,
-          parallel when high throughput is the requirement, i.e. where the app's response times can tolerate occasional longer pauses. Pauses are less frequent but longer (high throughput overall, with response-time spikes during a pause), and the parallel GC does its work with multiple threads in parallel, consuming CPU.
-          mostly concurrent on large apps and multiprocessor machines, when response time is important. Pause times are shorter but may be more frequent; some of the GC work, like marking, happens without any pause. Use CMS when the heap is less than 4 GB, else use G1.
-          Once the default is selected, check whether the requirement is met; if not, first try increasing -Xmx, then tune the individual heap areas.


Parallel collector:
-          -XX:+UseParallelGC
-          both minor and major collections are executed in parallel
-          the number of GC threads is calculated as roughly 5/8 * N (N = number of hardware threads), or can be set manually with -XX:ParallelGCThreads=
-          since a higher number of threads can cause fragmentation when promoting objects from young to tenured space, try reducing the thread count or increasing the tenured space to cope with fragmentation.
-          the parallel collector tunes itself automatically based on behavior targets, so there is no need to specify generation sizes or other granular tunings. The targets are max GC pause time, throughput, and heap footprint.
-          Max pause-time target: -XX:MaxGCPauseMillis=
-          Throughput target: -XX:GCTimeRatio=N, i.e. it tries to spend 1/(1+N) of the time in GC. The default is 99, i.e. 1/(1+99) = 1% in GC, leaving 99% to the app.
-          Footprint: basically -Xmx
-          the parallel collector tries to meet the max pause-time target first, then throughput, then footprint. It adjusts the growth/shrink percentages of the generations to achieve these targets.

Mostly concurrent collectors:
CMS: when the priority is faster response times, i.e. shorter pauses, and CPU can be spared
-          -XX:+UseConcMarkSweepGC
-          reduces pause time by keeping a few threads that concurrently (i.e. without pausing the app threads) mark the reachable and unreachable objects and sweep the unreachable ones, pausing the app only briefly at certain points of a collection. It always tries to keep the tenured space clean to avoid longer pauses or accumulation of objects. When it fails to keep enough tenured space free, a full collection happens with all app threads paused (the failure case).
-          In CMS, the tenured and young collections can happen independently, as they have different threads running concurrently with the application.
-          CMS output is a bit different from other GC outputs. Use -verbose:gc and -XX:+PrintGCDetails. The log contains the phases below:
           CMS-initial-mark – start of the concurrent collection cycle
           CMS-concurrent-mark – end of the concurrent marking phase
           CMS-concurrent-preclean
           CMS-remark
           CMS-concurrent-sweep – end of the concurrent sweeping phase
           CMS-concurrent-reset – getting ready for the next collection
-          CMS and Parallel compact the whole heap when no contiguous free space is available

G1: for large heaps; pause-time targets can be met with high probability while still achieving high throughput (i.e. more time given to the app)
-          the heap is partitioned into a set of equally sized regions, each a contiguous range of virtual memory. The algorithm performs a concurrent global marking phase to determine the live objects in the heap. After marking completes, it collects the mostly empty regions first to yield a large amount of free space; hence the name Garbage-First.
-          It continually works to reduce fragmentation by compacting during collections.
-          G1 is beneficial when there is a large amount of live data (around 50% of the heap), when the allocation rate varies, or when collections or compactions take a long time.

Default configuration on server class machines
On server-class machines, the following are selected by default if not specified otherwise.
Throughput garbage collector
Initial heap size of 1/64 of physical memory up to 1 GB
Maximum heap size of 1/4 of physical memory up to 1 GB
Server runtime compiler


And, to view the default configuration values, use java -XX:+PrintFlagsFinal -version
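The flag dump is long, so it helps to pull out just the values you care about. Below is a minimal Python sketch (Python is what this blog's other scripts use) that runs the command and parses the output; the flag names queried at the end are only examples, and the run is skipped if java is not on the PATH.

```python
import re
import subprocess

def parse_flags(text):
    """Parse `java -XX:+PrintFlagsFinal -version` output into {flag: value}.

    Lines look like: `    uintx MaxHeapSize := 1073741824    {product}`.
    """
    flags = {}
    for line in text.splitlines():
        m = re.match(r"\s*\S+\s+(\w+)\s*:?=\s*(\S+)", line)
        if m:
            flags[m.group(1)] = m.group(2)
    return flags

if __name__ == "__main__":
    try:
        out = subprocess.run(["java", "-XX:+PrintFlagsFinal", "-version"],
                             capture_output=True, text=True).stdout
        flags = parse_flags(out)
        for name in ("InitialHeapSize", "MaxHeapSize", "UseParallelGC"):
            print(name, "=", flags.get(name))
    except OSError:
        pass  # java not on the PATH
```

This is handy for confirming the ergonomics defaults (initial/max heap, selected collector) on a given machine.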


Sunday, October 2, 2016

JMeter Solr Banana


Want to give some color to JMeter? We all know JMeter is a great tool that helps load test apps.. how about bringing two more awesome tools to the table - Solr and Banana?

With Solr, we can store/index/search large amounts of data, and Banana is a pretty front end with plenty of modern HTML and JavaScript capabilities for drafting cool graphs to show the trends.

So, how can we use these tools?

We can do a lot, in fact, but to start with:

- Load test results to display the runtime stats - Response times/Transaction throughput/Bytes and the list goes on..
- Monitoring system resources like Memory/CPU/Disk/Load, JVM garbage collection activity etc.,

So, does it all sound interesting? Do you think we can really build a good monitoring tool? Well, below are a couple of dashboards.

Load Test Report:



System Resources


These are a few sample dashboards. The actual setup does a lot more: remote agents collect the data and push it to Solr, and the dashboards refresh the stats.
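As a rough illustration of the agent side, here is a hedged Python sketch that maps one line of a JMeter JTL (CSV) result file to a Solr document and posts it to Solr's JSON update handler. The Solr URL, the core name (`jmeter`), and the column order are assumptions; adjust them to your own setup and schema.

```python
import json
import urllib.request

# Column order of a basic JMeter JTL (CSV) line -- an assumption; match it to
# your jmeter.properties output configuration.
JTL_COLUMNS = ["timeStamp", "elapsed", "label", "responseCode", "success", "bytes"]

def to_solr_doc(jtl_line):
    """Turn one JTL CSV line into a dict suitable for Solr's JSON update handler."""
    doc = dict(zip(JTL_COLUMNS, jtl_line.strip().split(",")))
    doc["elapsed"] = int(doc["elapsed"])   # response time in ms
    doc["bytes"] = int(doc["bytes"])       # response size
    return doc

def post_to_solr(docs, url="http://localhost:8983/solr/jmeter/update?commit=true"):
    """POST a batch of docs to Solr (URL and core name are assumptions)."""
    req = urllib.request.Request(url, data=json.dumps(docs).encode("utf-8"),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    doc = to_solr_doc("1480500000000,345,Login,200,true,5120")
    print(doc["label"], doc["elapsed"])  # Login 345
```

A Banana dashboard pointed at the same core can then chart response times and throughput over the indexed samples.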



Saturday, September 24, 2016

Sizing JVMs and VM Memory



How do we estimate the JVM heap and host memory requirements in complex cases where there are multiple JVM instances per VM, multiple apps per JVM, and different usage requirements per app?

Let's see some of the metrics to collect to construct an equation for estimating the sizes.


  • List the apps per instance. Typically multiple apps are deployed on one JVM instance; they are not equally used, and each app has its own characteristics and therefore uneven memory requirements (based on requests, classes, code logic and constructs, etc.).
  • Identify the 'typical' transactions; in most apps, 20% of the features are executed 80% of the time.
  • Take a first measurement - the server startup heap. Once the server is fully up, record the live heap after the first full GC (HotSpot) or first OC (JRockit).
  • The first user access is typically the most demanding one, as it loads a lot of state. So do just a login into each app and note the OC/FGC heap delta per user. Per app, that delta is the amount of heap required for the first login.
  • So, for a good Xms, take the sum of the server startup heap and all the above deltas.
  • Now exercise the typical 20% transactions as a single user per app without logging out - these are the typical active users on the system. Call these deltas h1, h2, etc. This can be done on a warmed-up server, but it is best to have no other logged-in users on the system. Note each delta, or take the final delta after users of all the different apps have logged in and executed the typical flows without logging out.
  • Xmx, i.e. the maximum amount of memory, can then be Xms + (h1 * app1 concurrent users + h2 * app2 concurrent users + ...).

For HotSpot or other JVMs with more heap spaces (young/survivor, perm, etc.), those spaces can be sized as proportionate ratios of Xmx.

And a typical host memory requirement can be estimated as roughly 1.8 times Xmx.
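The estimation above can be reduced to a small calculator. This is only a sketch of the arithmetic described in this post; all the numbers in the example are made up.

```python
def estimate_heap(startup_heap_mb, first_login_deltas_mb, flow_deltas_mb, concurrent_users):
    """Heap sizing per the measurements described above (all sizes in MB).

    startup_heap_mb       -- live heap after server start (post full GC / first OC)
    first_login_deltas_mb -- per-app heap delta observed for the first login
    flow_deltas_mb        -- per-app deltas h1, h2, ... for one user running the
                             typical 20% flows without logging out
    concurrent_users      -- expected concurrent users per app (same order)
    """
    xms = startup_heap_mb + sum(first_login_deltas_mb)
    xmx = xms + sum(h * users for h, users in zip(flow_deltas_mb, concurrent_users))
    host_mb = round(1.8 * xmx)   # rule-of-thumb host footprint from this post
    return xms, xmx, host_mb

# Made-up numbers: 512 MB startup heap, two apps.
xms, xmx, host = estimate_heap(512, [64, 32], [4, 2], [200, 100])
print(xms, xmx, host)  # 608 1608 2894
```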

If someone can simulate load and is willing to bear the cost, time, and complexity of the setup needed to do that, then it is probably best to get the numbers from test results; the benefits are not limited to the JVM heap, as tests also size the other resource pools across the layers.

Wednesday, September 14, 2016

Mobiles - The way they transformed made things reachable to some but created complexity to others


I am talking about quality checks for applications built for mobile devices. The mobile transformation has to be counted among the fastest changes in technology, and so is the complexity of delivering applications on this new platform, which is becoming a must!

Besides making sure the features work, i.e. testing the functionality, it is just as important to ensure the app actually performs as expected. Someone might say there is nothing to think about since it is lightweight code, but there is in fact a lot that can disrupt performance - for example: supporting different vendors like Android/Apple, OS versions, underlying hardware, resolutions, GPUs, CPUs, memory, network carriers and subscribed bandwidths like 2G/3G/4G, inter-app interruptions, backend thread support for async calls.. what not!

Well, there is a lot. Here is how one can approach validating the performance of mobile applications and their upstream servers using some of the great tools in the market.

Below are some of the tasks to evaluate performance and what these tools can do..


  • Test and monitor the app's performance on different emulators. Done independently, this needs each IDE (Android Studio, the iOS IDE) and the respective skill sets. Perfecto addresses this with its support for multiple devices.
  • Test the app's performance on different real devices - this can't be done manually, as you cannot maintain a farm of devices. Perfecto can do it using its mobile device cloud, or its emulators are good enough as well.
  • Simulate the app's performance under different network speeds on devices/emulators - possible with various tools, but Perfecto with Shunra integration can manage devices while emulating various network bandwidths.
  • Capture the traffic and simulate load from thousands of devices against the app servers - this can be done using emulators and TCP capture, but Perfecto and Shunra can do it, and the captured traffic can be fed to LoadRunner. Shunra can also virtualize networks to emulate load from different locations/carriers/bandwidths during load tests.
  • At the same time, recheck the app's performance on the device while the server is under load - again with Perfecto.

It's a different world now with mobiles/tablets/wearables that move.. apps need to support all of these!
Gone are the days when apps were accessed only from a standalone PC.



Wednesday, August 24, 2016

End to end request processing time


In a simplistic view of a typical online system, these are the places to check to find any slowness.




Browser rendering time - if the page is too big, or too complex with JavaScript and style sheets, then rendering can take time.
The number of static content items on a web page has an impact on the overall load time, and if they are not cacheable, the round trips, even from a CDN, add to the load time.
Total download time for a page depends on how big the response content is, how many subrequests are triggered as part of the page, the network time, and the entire server-side processing time.
Page load time is the time by which the user sees the page, so all of the above impact it.

Coming to the network, it is important to know the path a request traverses from client to server. Is it taking the longest path via a CDN? How are addresses resolved over the public internet, and is any proxy being used? Network delay and packet loss are important factors to keep an eye on.

Coming to the server side, request processing time depends on many factors: the underlying infrastructure, resources, architecture, code logic and implementation, etc.
But there are a few checkpoints that help break it down.
Checking at the HTTP server layer shows the variation between the end page load time and the total server time; this helps identify any network delays.
Checking the difference between the HTTP server and app server times shows whether there is any delay in the middle layers, such as authentication or routing.
On the app server side, time can be spent in many places: requests can be routed to many servers or even to external systems. Using runtime instrumentation tools, it is possible to break down the time spent in pure code, wait times due to synchronous code blocking, GC pause times, time spent reading from or writing to sockets while interacting with the DB, and time spent in calls to other servers, such as remote calls (RJVM to EJBs) or service calls. By breaking it down this way, each underlying activity and its delays can be identified.
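The checkpoint arithmetic above can be sketched in a few lines. The measurements here are hypothetical; in practice they would come from browser timings, HTTP server access logs, and app server logs for the same request.

```python
def breakdown(page_load_ms, http_server_ms, app_server_ms):
    """Split end-to-end time across the checkpoints described above (in ms).

    page_load_ms   -- time until the user sees the page (browser side)
    http_server_ms -- request time measured at the HTTP server layer
    app_server_ms  -- request time measured at the app server layer
    """
    return {
        "network + browser rendering": page_load_ms - http_server_ms,
        "middle layers (auth, routing)": http_server_ms - app_server_ms,
        "app server (code, GC, DB, remote calls)": app_server_ms,
    }

# Hypothetical timings for one request.
for layer, ms in breakdown(1800, 900, 750).items():
    print(layer, ms)
```

Instrumentation tools can then split the last bucket further, into code, GC, socket, and remote-call time.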

Each of the above is no more than an index into an ocean of tunable metrics that each underlying technology module contains. Knowing where to look and what to tune is the key.



Application performance - odd bits



We normally warm up the systems and worry about how they perform under load, but will that meet customer expectations, improve customer experience, or avoid frustrating scenarios?
How about the 'first time' and idle-case performances?

Did you ever check the single-user response times and server-side request processing times? If the system does not perform well for a single user, it will not do so for many. The baseline performance for a single user is what needs to be tuned first.

What about first-time cases? We cannot ask users of the system to hang on until it warms up - then who will use it first? :) So analysing cold-case system performance is important. It is important to know what happens on first access across the layers in cold cases, i.e. after a restart of the app servers, entire VMs, or the DB, and what needs to be tweaked to get better first-time performance.

How about sleeping systems! Not all systems work round the clock, at least not all days of the week. So did you ever check the resource usage of idle systems? Some unexpected code path executions might happen even under no load, so it is important to know the resource usage at idle, as those systems might also run all the time.

How about app performance under very slow network access, or on networks with high packet loss? How reliable are the systems?

What happens when infrastructure fails? How many users can still continue doing whatever they were doing without any interruption, and feel the same app performance, while a DR process kicks in?

How about systems with long-running sessions? Users may not log out for a long time, and the sessions must be kept that long - what is the impact?

And how do applications cope when the majority of end users do not log out but just close the browser? How should memory usage be handled in those cases?



Saturday, August 20, 2016

Java EE - EJB


Enterprise Java Beans - EJB

EJBs are Java EE server-side components that implement an app's business logic. They are normally deployed in EJB containers provided by the app servers. Implementing EJBs can provide scalability to the application and better handling of security and transactions. EJBs can also implement web services.

Types of EJB:
Session beans - implement user actions
- stateful: maintains state for a client; the bean cannot be shared.
- stateless: does not maintain state, so it can be allocated to any client. Instances are reusable, so this pool will be smaller compared to stateful beans.
- singleton session beans: have only one instance for the whole lifetime. They can be initialized when the app starts. Although they act like stateless beans, there is no pool because of the single instance.

Message-Driven Beans (MDB): for listening to messages from queues or topics via JMS.
(If you have read about entity beans: they are now part of the Java Persistence API, referring to Java EE 7.)

Implementation is simple.. in fact, the annotations made it so. To write a simple stateless EJB, just annotate the class with @Stateless and implement the business logic. To call this EJB, a servlet will do; of course, you need to annotate it with @WebServlet(urlPatterns="/"), where the URL pattern defines the context path, and as usual extend HttpServlet. Then, to access the EJB, just annotate the declared EJB instance field with @EJB.

However, in a typical EJB implementation there could be all types of session beans - stateless beans, stateless beans implementing a service, stateful beans accessed remotely, etc.

A remote interface is required for beans that allow remote access. This remote business interface defines all the business methods of a bean, is annotated with @Remote from the javax.ejb package, and is implemented by the session bean. A session bean can also be an endpoint for a web service.

A stateful session bean can have methods annotated with @Remove, which a client can invoke to remove the instance.

In the case of a singleton bean, concurrent access from clients can be controlled in two ways - container managed or bean managed - by annotating accordingly. The methods must be annotated with a lock type, i.e. READ or WRITE, so that concurrent access is either allowed or serialized respectively.

For stateless session beans to implement service endpoints, they must be annotated with @WebService, and the exposed business methods with @WebMethod. They can also implement asynchronous methods so that clients need not wait for responses from long-running methods.

Coming to EJB pools: in WebLogic, the max-beans-in-free-pool element in weblogic-ejb-jar.xml determines how many EJB instances are made available in the free pool, putting a cap on the pool size. For MDBs, the container creates as many instances as required, limited by max-beans-in-free-pool. The default number of MDB threads is 16, but this can be changed with a custom queue or work managers.

Sunday, August 14, 2016

Oracle VM


Virtualization is a technique for sharing hardware resources among multiple systems or users to achieve optimal resource usage and reduce costs.

Although virtualization is conceptually generic, let's talk about server virtualization. It means a bunch of hardware resources - CPUs, memory, disks, ports, etc. - are shared among multiple OSs of the same or different types. To achieve this, we need someone or something to manage the underlying hardware and the guests (OSs) running above it. This 'manager' is what is called a hypervisor.

There are two types of hypervisors.
A native or bare-metal hypervisor is software that runs directly on the host's hardware to control it and to monitor the guest OSs; imagine it as something that mediates between the guest OSs and the underlying hardware. Examples of such implementations are Oracle VM, VMware ESXi, Xen, and Microsoft Hyper-V.

The other type of hypervisor runs within a traditional operating system, with guest OSs running on top of it. An example is Oracle VirtualBox, which can be installed on a PC where Windows is the base OS; VirtualBox can then host another guest OS such as Linux.


Oracle VM Server:

It can be installed on x86 instruction set platforms with the Xen hypervisor (GPL licensed) or on SPARC platforms (which have their own hypervisor).
In general, these implementations have their own firmware/hardware, a hypervisor, and then a super domain/VM that controls resource allocation to the other guest VMs (also called domains, or simply guests).

So, simply put, an Oracle VM server is a collection of hardware (CPU, memory, network, IO, etc.), a hypervisor (managing the underlying bare metal, i.e. the hardware), and domains (the VMs with their own OSs, except Dom0, which is a complete Linux kernel and manages all the other domains).

Lets explore some interesting things related to Oracle VM

CPU capacity:
How do we determine the CPU capacity on a VM server?
xm info is the command to use. For example, as shown below, the number of CPUs is 72, which is really the number of hardware threads: there are 2 nodes, 18 cores per socket, and 2 threads per core,
i.e. 2 * 18 * 2 = 72 threads (numbered 0 to 71 in total: 0-35 on socket 1, 36-71 on socket 2).

nr_cpus                : 72
nr_nodes               : 2
cores_per_socket       : 18
threads_per_core       : 2

The CPU topology can be viewed using the command xenpm get-cpu-topology
CPU     core    socket  node
CPU0     0       0       0
CPU1     0       0       0
CPU2     1       0       0
CPU3     1       0       0
..
xm info also gives high-level VM server details such as the supported bit width, the instruction set (like Intel x86), the number of real CPUs, nodes, sockets, threads per core, CPU frequency,
memory, page size, etc.
In a hyperthreaded model, each core runs 2 threads instead of one; this can have counter-effects but can improve efficiency.
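The thread arithmetic above can be cross-checked by parsing the `xm info` fields. A small Python sketch, using the sample output shown in this post:

```python
def total_threads(xm_info_text):
    """Cross-check nr_cpus from the other `xm info` fields shown above."""
    fields = {}
    for line in xm_info_text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    nodes = int(fields["nr_nodes"])           # here, one node per socket
    cores = int(fields["cores_per_socket"])
    threads = int(fields["threads_per_core"])
    return nodes * cores * threads

sample = """nr_cpus                : 72
nr_nodes               : 2
cores_per_socket       : 18
threads_per_core       : 2
"""
print(total_threads(sample))  # 72, matching nr_cpus
```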


vCPUs
Virtual CPUs are the CPUs assigned to a guest/DomU. A virtual machine running as a DomU can be assigned, say, 10 CPUs, which are considered virtual CPUs, and their actual binding to real CPUs depends on how they are configured. In the example below, vm1 is a virtual machine with id=1 and 3 vCPUs, which are in the blocked state (-b-) and currently mapped to CPUs 3, 6 and 7. vm1 is configured with a CPU affinity of 2-35, which is the first socket on a 2-socket, 72-thread machine. Since there is no absolute binding, the mapping can change at runtime; depending on availability,
the vCPUs can be mapped to any of the real CPUs in the range 2-35.

xm vcpu-list
Name  ID  VCPU   CPU State   Time(s) CPU Affinity
vm1   1     0     3   -b-   5354.1 2-35
vm1   1     1     6   -b-   2312.4 2-35
vm1   1     2     7   -b-   2337.8 2-35

You can pin CPUs for guest VMs at runtime, but changing any affinity for Dom0 requires a reboot, and Dom0 always takes top priority.
And it is always good to monitor the real CPU usage from the VM server, to check how, in an oversubscribed case, busy VMs on the same socket can impact each other.

JNDI - Java Naming and Directory Interface





What is JNDI? It is a naming service.
Why is it needed? In a distributed enterprise application there are multiple resources, like DB pools, and business components, like EJBs, deployed on the Java EE containers, and they need a way to be located. JNDI serves that purpose.

Applications can use annotations to locate a resource. Take datasources, which are database resources providing connections to the database: when application code refers to a datasource and invokes the JDBC API to get a connection, it gets a physical connection. If connection pooling is implemented, it instead gets a handle to a pooled connection object rather than a direct physical connection. These connections need to be closed, and when closed they go back to the pool. A pool of database connections gives better performance and a better connection handling mechanism.
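As a toy illustration of that borrow/return cycle (not a real JDBC pool; the connection factory here is a stand-in for whatever creates the physical connections):

```python
import queue

class ConnectionPool:
    """Toy pool showing the borrow/return cycle described above."""
    def __init__(self, make_conn, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(make_conn())  # physical connections created up front

    def get_connection(self):
        return self._pool.get()          # blocks if the pool is exhausted

    def release(self, conn):
        self._pool.put(conn)             # "closing" returns the handle to the pool

pool = ConnectionPool(lambda: object(), size=2)
conn = pool.get_connection()
pool.release(conn)                       # back in the pool for the next caller
```

A real container-managed pool adds validation, leak detection, and capacity growth on top of this basic cycle.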

Similarly JNDI mapping can be done to other services like JMS, LDAP etc.,

Below are some of the resources on a GlassFish server and their JNDI mappings.








Monday, August 8, 2016

Oracle database performance


Let's explore Oracle database performance aspects at a high level.

Some of the key terms to know:

SGA: System Global Area - basically a collection of memory units or structures shared by all the processes of a DB instance.
PGA: Program Global Area - a memory region specific to a process (server process or background process).
Buffer cache: a buffer holding data blocks read from the data files; the buffer cache is shared across users.
Shared pool: contains program data like parsed SQLs, PL/SQL code, the data dictionary cache, etc., and is accessed in almost every DB operation.

And let's see some of the interesting views:
gv$process - details on the currently active processes, which are either on CPU, in a latch wait, or spinning on a latch; it also contains the PGA details per process.
gv$sgastat - details on the System Global Area (SGA) for each pool - shared/large/java/streams pools.
gv$session - details for each current session. Data from this view is sampled every second into V$ACTIVE_SESSION_HISTORY. From 11g Release 2 onwards, each individual request can be traced with the help of its ECID.
gv$pgastat - details on PGA usage.

We can take periodic snapshots of the above views to analyze further.
Example: CREATE SNAPSHOT snapshotonprocess AS SELECT * FROM gv$process
Then the per-process PGA can be calculated, e.g.: select inst_id, count(*) cnt, round(sum(pga_used_mem) / 1024 / 1024 / 1024, 2) from snapshotonprocess group by inst_id order by inst_id
- adding a criterion like 'background is not null' gives the PGA stats for background processes.

Similarly, we can query SGA stats, shared pool usage, buffer cache usage, active sessions, etc.

Another good place to look for session performance, and to point out slow SQLs or event waits, is V$ACTIVE_SESSION_HISTORY.
V$ACTIVE_SESSION_HISTORY contains sampled session activity in the database; samples are taken every second. So it is possible to calculate how much time was spent in the DB, and on which queries, for an end-user request by tracing its ECID in this view.
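Since ASH samples active sessions roughly once per second, DB time per request can be approximated by counting samples per ECID. Here is a small sketch over rows already fetched from the view; the row shape and the sample values are assumptions for illustration.

```python
from collections import Counter

def db_time_by_ecid(ash_rows):
    """Approximate DB seconds per request: ASH samples active sessions about
    once per second, so each sampled row for an ECID is ~1 s of DB time."""
    times = Counter()
    for ecid, sql_id, event in ash_rows:  # columns assumed from the view
        times[ecid] += 1
    return times

# Hypothetical sampled rows for two end-user requests.
rows = [("ecid-1", "a1b2", "db file sequential read"),
        ("ecid-1", "a1b2", "ON CPU"),
        ("ecid-2", "c3d4", "ON CPU")]
print(db_time_by_ecid(rows))  # ecid-1: ~2 s, ecid-2: ~1 s
```

Grouping the same rows by sql_id or event instead shows which queries or waits dominate a slow request.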

Let's look at the awesome performance reports the Oracle database offers.

AWR - a great performance report on DB workloads. It contains information about DB time, system resource usage, waits and other events that could impact performance, and SQL statistics such as long-running SQLs, resource-intensive SQLs, or SQLs with high buffer gets.
We can check how the buffer cache and shared pool sizes changed from the beginning to the end of the snapshot window.
Logical reads (the more the better), physical reads (the fewer the better), hard parses (many of them on a warmed-up system suggest the plans are not good and stats may need to be gathered, or something is attached to the queries at runtime), rollbacks, latch efficiency metrics (latches are short-lived serialization mechanisms Oracle uses on shared structures), the top 5 foreground events, and CPU and memory stats are good places to start when there is an overall DB performance hit.
If the problem is specific to certain SQLs, then the areas to look at in AWRs are the related SQL statistics - number of executions, time per execution, CPU used, buffer gets, hard parses, etc. Upon identifying the SQLs (the best way is to match the ECIDs for frontend-initiated SQLs),
SQLHC is the next step to analyze the historical performance, the SQL execution plan, indexes, bind variables/conditions, etc.

For better insight, keep a good baseline AWR in the system so that any future snapshot can be compared against it.

To create an AWR snapshot:
EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;
select snap_id from dba_hist_snapshot order by snap_id asc;

To generate an AWR report: select output from table(dbms_workload_repository.awr_report_html(,,,));

To declare the baseline:
BEGIN
  DBMS_WORKLOAD_REPOSITORY.create_baseline(
    start_snap_id => <###>,
    end_snap_id   => <###>,
    baseline_name => 'baseline');
END;
/

To compare AWRs:
@$ORACLE_HOME/rdbms/admin/awrddrpi.sql
or
select * from TABLE(DBMS_WORKLOAD_REPOSITORY.awr_diff_report_html(,,,,,,,));


Other good reports to look at:
ADDM report - an Automatic Database Diagnostic Monitoring report; it helps identify issues in the Automatic Workload Repository.

Saturday, July 16, 2016

Weblogic MBean Monitoring


WebLogic provides a nice feature to monitor server resources: MBeans.

WebLogic MBean types: Configuration MBeans, Runtime MBeans, and Application-Defined MBeans.

These can be browsed from the EM console: go to the admin server EM URL for the domain you are interested in, navigate to the server under the domain, and open the menu - System MBean Browser.

You can see the tree structure for the three MBean types above. For runtime stats, we normally check the Runtime MBeans: under com.bea, select a server, then ServerRuntime, and pick a type of MBean to browse.


Example MBean for JDBC Data Sources

MBean Name com.bea:ServerRuntime=Server_1,Name=Server_1,Location=Server_1,Type=JDBCServiceRuntime
There are many attributes on this MBean; the interesting one for monitoring resource usage is 'JDBCDataSourceRuntimeMBeans'.
This attribute lists all the data sources configured for the server - click on a datasource to see its current usage.

This is the same as what we monitor from the admin console - Data Sources - Monitoring.

The same can also be done via a script using Python/WLST:

------------------------------------------------------------------------------------
def getConnectionPoolStat():
    connect('adminUserName', 'adminPassword', 'adminURL')
    name = cmo.getName()
    domainRuntime()
    cd('ServerRuntimes')
    srvrs = domainRuntimeService.getServerRuntimes()
    crntdr = pwd()
    for srv in srvrs:
        srvName = srv.getName()
        cd(crntdr)
        print srvName
        cd(srvName + '/JDBCServiceRuntime/' + srvName)
        allDS = cmo.getJDBCDataSourceRuntimeMBeans()
        cd('JDBCDataSourceRuntimeMBeans')
        for ds in allDS:
            dsName = ds.getName()
            cd(dsName)
            lcc = cmo.getLeakedConnectionCount()
            ccy = cmo.getCurrCapacity()
            accc = cmo.getActiveConnectionsCurrentCount()
            frrc = cmo.getFailedReserveRequestCount()
            ctc = cmo.getConnectionsTotalCount()
            acavg = cmo.getActiveConnectionsAverageCount()
            cchc = cmo.getCurrCapacityHighCount()
            achc = cmo.getActiveConnectionsHighCount()
            ftrc = cmo.getFailuresToReconnectCount()
            st = name + ',' + dsName + ',' + srvName + ',' + str(acavg) + ',' + str(accc) + ',' + str(achc) + ',' + str(ctc) + ',' + str(ccy) + ',' + str(cchc) + ',' + str(lcc) + ',' + str(frrc) + ',' + str(ftrc) + '\n'
            fo.write(st)
            cd('..')


fo=open("outputcsvfile","w+")
fo.write('DomainName , DataSourceName, ServerName, Active Connections Avg Count, Active connections current count, Active Connections high count, Connections Total Count, Current Capacity , Current High Capacity, Leaked Connection Count, Failed ReserveRequest Count, FailuresToReconnectCount  \n')
getConnectionPoolStat()
fo.close()

This needs to be executed from WLST in the Oracle common home.
Sessions per app can also be collected through the EM metric weblogic_j2eeserver:app_session..