Inbound conversion from EBCDIC->ASCII is done by PE, whereas outbound conversion (ASCII->EBCDIC) is done by AMP
A PE supports 120 sessions. Gateway supports a max of 1200 sessions, therefore a max of 10 PEs per node. Major functions: Session Control, Parser, Optimizer, Dispatcher
A session can have up to 16 requests, though only one can be active at a time
Parcels to TD: [Request, Data (0 or 1), Respond] or [Respond (continuation only)]
Parcels from TD: Success/Fail, Record (0 or more)
Request-To-Steps Cache is per PE and available to only that PE
DDL and EXPLAIN plans are not cached in RTS
Parameterized queries get stored in RTS.
Non-parameterized queries go to the Text area first; the second time they are seen, they are moved to RTS
PEs purge unmarked plans from the RTS cache every 4 hours (UPI, USI, and nested-join plans are marked, since they are demography independent)
DD cache stores DD info and is purged (except entries with STATS) every 4 hours.
AMPs send results back to the Dispatcher, except for express queries (PI equality and no SI or fallback), which send data directly to the user
Optimizer-generated steps: Serial, Parallel. For MSRs only: Individual and Common
AMP may override the Optimizer on which steps, and how many, are actually run in parallel
An MSR with many single-AMP statements is a partially serial operation: the PE must receive confirmation that a statement has acquired an AWT before it sends out the next statement.
An insert array (for example, multiple rows with a USING clause) is better than an MSR
TD12+: the Optimizer can generate a Specific plan (not cached) instead of a Generic plan by peeking at USING values in the data parcel. The exception is CURRENT_DATE, which causes Specific-plan generation but is cached until the date changes
Express Requests are internal requests (by PE) sent directly to AMP.
Express requests can be expedited to a Work09 AWT if reserved AWTs are available.
Only used when there is PI or USI access (since the request is sent directly to the AMP)
dbc.QryLogV.CacheFlag is used for parameterized queries:
S: Specific plan - parameterized query is seen for the first time
A: Always specific plan - seen a 2nd time and parsing time is insignificant
G: Generic plan - seen a 2nd time and not A
T: Query cache - third time onward, if the plan is found in the cache
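The CacheFlag progression above can be sketched as a small decision function (an illustrative sketch only; the function and argument names are assumptions, not Teradata internals):

```python
def next_cache_flag(times_seen, parse_cost_insignificant, plan_in_cache):
    """Illustrative sketch of the CacheFlag progression for a
    parameterized query (S -> A/G -> T). Not actual Teradata code."""
    if times_seen == 1:
        return "S"          # first time: build a Specific plan
    if times_seen == 2:
        # cheap to parse -> keep building Specific plans every time
        return "A" if parse_cost_insignificant else "G"
    # third time onward: reuse the cached Generic plan if present
    return "T" if plan_in_cache else "G"

# e.g. an expensive-to-parse query progresses S, then G, then T
print([next_cache_flag(1, False, False),
       next_cache_flag(2, False, False),
       next_cache_flag(3, False, True)])
```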
AWTs/AMP can be increased in cases such as reserving expedited AWTs, but this is used rarely
AWTs are released at the end of final query step before BYNET merge begins.
BYNET merge process briefly acquires AWTs to move data from spool to buffer
When a query is demoted from an expedited work type, it keeps its AWT until the step finishes and then requests a new AWT from the Work0 pool.
Tactical queries might still be able to run under flow control since they have their own monocast queue (single- or group-AMP) vs. the all-AMP new-work queue
FlowCtlCnt is incremented only when AMP enters Flow control
If FlowControlled = 1 and FlowCtlCnt = 0 => perpetual flow control
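The perpetual-flow-control test above can be expressed as a small classifier (a monitoring sketch; the function name is invented, the counter semantics are from the notes):

```python
def classify_flow_control(flow_controlled, flow_ctl_cnt):
    """Classify an AMP's flow-control state from the two counters.
    FlowCtlCnt increments only when the AMP *enters* flow control,
    so FlowControlled = 1 with FlowCtlCnt = 0 means the AMP has been
    stuck in flow control the whole time: perpetual flow control."""
    if flow_controlled and flow_ctl_cnt == 0:
        return "perpetual"
    if flow_controlled:
        return "intermittent"
    return "normal"

print(classify_flow_control(1, 0))   # perpetual
```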
The message queue in the AMPs is ordered first by message type and priority, then by the workload's resource consumption, and finally by time of arrival.
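That ordering amounts to a composite sort key, sketched below with a priority queue (field names and weights are illustrative assumptions, not AMP internals):

```python
from dataclasses import dataclass, field
import heapq
import itertools

_seq = itertools.count()  # tie-breaker that preserves arrival order

@dataclass(order=True)
class AmpMessage:
    # Composite key mirroring the AMP queue ordering in the notes:
    # message type/priority first, then workload resource usage,
    # then time of arrival.
    msg_priority: int          # lower = more urgent (type + priority)
    wd_resource_usage: float   # lighter workloads are consumed first
    arrival: int = field(default_factory=lambda: next(_seq))
    payload: str = field(default="", compare=False)

queue = []
heapq.heappush(queue, AmpMessage(2, 0.9, payload="heavy all-AMP step"))
heapq.heappush(queue, AmpMessage(1, 0.1, payload="tactical step"))
heapq.heappush(queue, AmpMessage(2, 0.1, payload="light all-AMP step"))

print([heapq.heappop(queue).payload for _ in range(3)])
```

Popping yields the tactical step first, then the lighter of the two equal-priority steps, then the heavy one.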
ROT: Reserve expedited AWTs for your worst-case scenario, not the average case (AWT Orange Book by Carrie Ballinger)
ROT: Make InUseMax the primary metric; supplement with FlowCtlCnt monitoring
ROT: FlowCtlTime > 100 on most AMPs => Reduce concurrency
Tactical tier has no share because it is always allocated practically all resources
Tactical WDs have two exception thresholds: {per node, total} x {CPU, I/O}.
If only the per-node limit is exceeded, the request gets demoted only on that node. It is then possible that the query runs in two different workloads on different nodes
FinalWDID is changed only when the sum-over-all-nodes limit is breached
SLG: Share % is allocated at the workload level => divided evenly among all active requests
Unused resources are offered to other workloads on the same tier, then to the tier immediately above it
A given workload can consume more resources than allocated if lower tiers cannot use theirs
Shares of an inactive WD are first distributed to the other WDs on the same tier before going to "Remaining"
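The even per-request split of a workload's SLG share is simple arithmetic; a toy calculation (the 20% figure is invented for illustration):

```python
def per_request_share(wd_share_pct, active_requests):
    """Even split of a workload-level SLG share % across the
    workload's active requests, as described in the notes."""
    if active_requests == 0:
        # inactive WD: its share goes to same-tier siblings, then Remaining
        return 0.0
    return wd_share_pct / active_requests

# A WD allocated 20% with 4 active requests gives each request 5%.
print(per_request_share(20.0, 4))   # 5.0
```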
I/O Prioritization by TDSched module
Physical I/O is measured by bandwidth rather than I/O counts
4 I/O queues: Critical > Expedited > Normal > FSG flush. Last 2 can be preempted
I/Os can be combined (merged) to reduce disk-arm movement even if they have different priorities, as long as they come from the same priority queue.
IO prioritization takes place once disk queue exceeds 16 outstanding requests
schmon workload types: TA, WS/n, and TS for Tactical, SLG tier n, and Timeshare respectively
output reflects usage over 1 second by default
Minimum CPU time-slot is 0.05 ms.
Tactical WDs use expedited AWTs automatically. An SLG1 WD can be made to use expedited AWTs as well
A hard limit can be applied to a VP or an SLG WD
A VP has one fixed limit that is applied to both CPU and I/O, and it is affected by COD
A WD hard limit is always a % of total capacity, without regard to COD
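The COD distinction can be shown with toy numbers (all figures and function names invented for illustration):

```python
def vp_effective_limit(vp_limit_pct, cod_pct):
    """VP fixed limit is scaled by COD (applies to both CPU and I/O)."""
    return vp_limit_pct * cod_pct / 100.0

def wd_effective_limit(wd_limit_pct, cod_pct):
    """WD hard limit is a % of *total* capacity, unaffected by COD."""
    return wd_limit_pct

# With 80% COD: a 50% VP limit effectively becomes 40% of the platform,
# while a 50% WD hard limit stays at 50% of total capacity.
print(vp_effective_limit(50.0, 80.0), wd_effective_limit(50.0, 80.0))
```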