resmgr:cpu quantum waits

Author: 惜分飞. © All rights reserved; do not reproduce in any form without the author's permission.

Last night a database was upgraded from 9i to 11g (using exp/imp). Once the application was brought back online, the database showed a large number of resmgr:cpu quantum waits.

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production

SQL> SELECT sid,event FROM v$session WHERE wait_class#<>6;

       SID EVENT
---------- ----------------------------------------------------------------
        27 resmgr:cpu quantum
        38 resmgr:cpu quantum
        43 resmgr:cpu quantum
        46 resmgr:cpu quantum
       113 resmgr:cpu quantum
       118 resmgr:cpu quantum
       125 resmgr:cpu quantum
       140 resmgr:cpu quantum
       143 resmgr:cpu quantum
       199 resmgr:cpu quantum
       205 resmgr:cpu quantum

       SID EVENT
---------- ----------------------------------------------------------------
       212 resmgr:cpu quantum
       220 resmgr:cpu quantum
       221 resmgr:cpu quantum
       223 resmgr:cpu quantum
       238 resmgr:cpu quantum
       241 resmgr:cpu quantum
       301 resmgr:cpu quantum
       313 resmgr:cpu quantum
       314 resmgr:cpu quantum
       405 resmgr:cpu quantum
       410 resmgr:cpu quantum

       SID EVENT
---------- ----------------------------------------------------------------
       415 resmgr:cpu quantum
       435 resmgr:cpu quantum
       502 resmgr:cpu quantum
       503 resmgr:cpu quantum
       509 resmgr:cpu quantum
       510 resmgr:cpu quantum
       512 resmgr:cpu quantum
       521 resmgr:cpu quantum
       526 resmgr:cpu quantum
       528 resmgr:cpu quantum
       532 resmgr:cpu quantum

       SID EVENT
---------- ----------------------------------------------------------------
       533 enq: TX - row lock contention
       589 resmgr:cpu quantum
       596 resmgr:cpu quantum
       600 resmgr:cpu quantum
       609 resmgr:cpu quantum
       611 resmgr:cpu quantum
       625 resmgr:cpu quantum
       635 null event
       707 resmgr:cpu quantum
       727 resmgr:cpu quantum
       731 SQL*Net message to client

44 rows selected.

Checking the alert log:

Sat Jun 09 06:00:00 2012
Setting Resource Manager plan SCHEDULER[0x310C]:DEFAULT_MAINTENANCE_PLAN via scheduler window
Setting Resource Manager plan DEFAULT_MAINTENANCE_PLAN via parameter
Sun Jun 10 02:00:00 2012
Closing Resource Manager plan via scheduler window
Clearing Resource Manager plan via parameter

Sun Jun 10 06:00:00 2012
Setting Resource Manager plan DEFAULT_MAINTENANCE_PLAN via parameter
Mon Jun 11 02:00:00 2012
Closing Resource Manager plan via scheduler window
Clearing Resource Manager plan via parameter

Mon Jun 11 22:00:00 2012
Setting Resource Manager plan DEFAULT_MAINTENANCE_PLAN via parameter
Tue Jun 12 02:00:00 2012
Closing scheduler window
Closing Resource Manager plan via scheduler window
Clearing Resource Manager plan via parameter

Tue Jun 12 22:00:00 2012
Setting Resource Manager plan SCHEDULER[0x3108]:DEFAULT_MAINTENANCE_PLAN via scheduler window
Setting Resource Manager plan DEFAULT_MAINTENANCE_PLAN via parameter
Wed Jun 13 02:00:00 2012
Closing scheduler window
Closing Resource Manager plan via scheduler window
Clearing Resource Manager plan via parameter

As the alert log shows, the scheduler windows enable and disable the DEFAULT_MAINTENANCE_PLAN resource plan on a fixed schedule, so a Resource Manager plan is active from 22:00 until 02:00. The go-live test took place just before 02:00, which is why the session query above showed so many resmgr:cpu quantum waits: they were caused by the active Resource Manager plan (switched on and off by the scheduler windows). In many cases a database serves a single application and there is little real need for Resource Manager.
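
To confirm which plan is active at any given moment, a quick check (on 11g, V$RSRC_PLAN should report the currently loaded top plan) is:

SQL> show parameter resource_manager_plan
SQL> select name, is_top_plan from v$rsrc_plan;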
In 11g it can be disabled as follows:

1. Set the current resource manager plan to null (or another plan that is not restrictive):

alter system set resource_manager_plan='' scope=both;

2. Change the active windows to use the null resource manager plan (or other nonrestrictive plan) using:

execute dbms_scheduler.set_attribute('WEEKNIGHT_WINDOW','RESOURCE_PLAN',''); 
execute dbms_scheduler.set_attribute('WEEKEND_WINDOW','RESOURCE_PLAN','');

3. Then, for each window_name (WINDOW_NAME from DBA_SCHEDULER_WINDOWS), run the following (a PL/SQL loop that automates this is sketched after the window list below):

execute dbms_scheduler.set_attribute('<window name>','RESOURCE_PLAN','');

SQL> select WINDOW_NAME  from DBA_SCHEDULER_WINDOWS;

WINDOW_NAME
------------------------------
MONDAY_WINDOW
TUESDAY_WINDOW
WEDNESDAY_WINDOW
THURSDAY_WINDOW
FRIDAY_WINDOW
SATURDAY_WINDOW
SUNDAY_WINDOW
WEEKNIGHT_WINDOW
WEEKEND_WINDOW

9 rows selected.
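
Rather than issuing set_attribute once per window, a short anonymous PL/SQL block can clear RESOURCE_PLAN on all of them in one pass. This is only a sketch, assuming the default windows listed above and a user with the MANAGE SCHEDULER privilege:

BEGIN
  -- Clear the resource plan on every scheduler window so that opening
  -- a window no longer activates DEFAULT_MAINTENANCE_PLAN.
  FOR w IN (SELECT window_name FROM dba_scheduler_windows) LOOP
    DBMS_SCHEDULER.SET_ATTRIBUTE(
      name      => w.window_name,
      attribute => 'RESOURCE_PLAN',
      value     => '');
  END LOOP;
END;
/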

4 comments on “resmgr:cpu quantum waits”

  1. 惜分飞 says:
    alter system set resource_manager_plan='' scope=both;
    execute dbms_scheduler.set_attribute('MONDAY_WINDOW','RESOURCE_PLAN','');
    execute dbms_scheduler.set_attribute('TUESDAY_WINDOW','RESOURCE_PLAN','');
    execute dbms_scheduler.set_attribute('WEDNESDAY_WINDOW','RESOURCE_PLAN','');
    execute dbms_scheduler.set_attribute('THURSDAY_WINDOW','RESOURCE_PLAN','');
    execute dbms_scheduler.set_attribute('FRIDAY_WINDOW','RESOURCE_PLAN','');
    execute dbms_scheduler.set_attribute('SATURDAY_WINDOW','RESOURCE_PLAN','');
    execute dbms_scheduler.set_attribute('WEEKNIGHT_WINDOW','RESOURCE_PLAN','');
    execute dbms_scheduler.set_attribute('WEEKEND_WINDOW','RESOURCE_PLAN','');
    
  2. 惜分飞 says:

    It can also be disabled by setting a hidden parameter, although a restart is required for the change to take effect:

    SQL> alter system set "_resource_manager_always_on"=false;
    alter system set "_resource_manager_always_on"=false
                     *
    ERROR at line 1:
    ORA-02095: specified initialization parameter cannot be modified
    
    
    SQL> alter system set "_resource_manager_always_on"=false scope=spfile;
    
    System altered.
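
    To verify the hidden parameter after the restart, the usual query against the internal parameter views can be used (an assumption here: it requires SYS access to x$ksppi/x$ksppsv):

    SQL> select a.ksppinm name, b.ksppstvl value
           from x$ksppi a, x$ksppsv b
          where a.indx = b.indx
            and a.ksppinm = '_resource_manager_always_on';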
    
  3. 惜分飞 says:

    High “Resmgr:Cpu Quantum” Wait Events In 11g Even When Resource Manager Is Disabled

    Applies to:
    Oracle Server - Enterprise Edition - Version 11.1.0.6 to 11.1.0.7 [Release 11.1]
    Information in this document applies to any platform.
    ***Checked for relevance on 18-NOV-2011***
    
    Symptoms
    Issuing a sqlplus / as sysdba might be hanging and/or high waits on event 'resmgr:cpu quantum' 
    might be noticed even when resource manager is disabled.    
    
    You already have confirmed parameter RESOURCE_MANAGER_PLAN is set to null 
    but still noticing the above wait events.
    
    Top 5 Timed Foreground Events:

    Event                      Waits  Time(s)  Avg wait(ms)  % DB time  Wait Class
    ------------------------ ------- -------- ------------- ---------- ----------
    resmgr:cpu quantum         1,596  346,281        216968      89.19  Scheduler
    db file scattered read   171,071   14,778            86       3.81  User I/O
    log file sync             28,575   10,810           378       2.78  Commit
    db file sequential read  943,457    6,569             7       1.69  User I/O
    DB CPU                              2,133                     0.55
    
    Cause
    This could be due to DEFAULT_MAINTENANCE_PLAN. From 11g onwards every weekday window has 
    a pre-defined Resource Plan called DEFAULT_MAINTENANCE_PLAN, which will become active 
    once the related window opens.
    
    Following entries can also be noted in alert log at the time of issue.
    
    Wed Sep 16 02:00:00 2009
    Clearing Resource Manager plan via parameter
    :
    Wed Sep 16 22:00:00 2009
    Setting Resource Manager plan SCHEDULER[0x2C55]:DEFAULT_MAINTENANCE_PLAN via scheduler window
    Setting Resource Manager plan DEFAULT_MAINTENANCE_PLAN via parameter
    Wed Sep 16 22:00:05 2009
    Begin automatic SQL Tuning Advisor run for special tuning task "SYS_AUTO_SQL_TUNING_TASK"
    
    Solution
    To disable the DEFAULT_MAINTENANCE_PLAN you can use the below steps as suggested in Note 786346.1
    
    1. Set the current resource manager plan to null (or another plan that is not restrictive):
    
    alter system set resource_manager_plan='' scope=both;
    
    2. Change the active windows to use the null resource manager plan (or other nonrestrictive plan) using:
    
    execute dbms_scheduler.set_attribute('WEEKNIGHT_WINDOW','RESOURCE_PLAN',''); 
    execute dbms_scheduler.set_attribute('WEEKEND_WINDOW','RESOURCE_PLAN','');
    
    3. Then, for each window_name (WINDOW_NAME from DBA_SCHEDULER_WINDOWS), run:
    
    execute dbms_scheduler.set_attribute('<window name>','RESOURCE_PLAN','');
    
  4. 惜分飞 says:

    In Oracle 11g the automatic maintenance jobs (Space Advisor, Gather Stats, Tuning Advisor) are by default run under the Resource Manager. The default settings give these tasks up to 25% of the CPU during their scheduler windows. If you prefer these tasks to always take a “back seat” to any other jobs running on the system, it may be worth slightly adjusting the resource allocation of the DEFAULT_MAINTENANCE_PLAN.

    In the DEFAULT_MAINTENANCE_PLAN (the default version) we have:
    Level 1: SYS_GROUP: 100%
    Level 2: ORA$AUTOTASK_SUB_PLAN 25% / ORA$DIAGNOSTICS: 5% / OTHER_GROUPS 70%
    Level 3+: none
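
    These allocations can be read straight from the data dictionary; a quick check (DBA_RSRC_PLAN_DIRECTIVES exposes the MGMT_P1..P8 percentages per consumer group):

    SQL> select group_or_subplan, mgmt_p1, mgmt_p2, mgmt_p3
           from dba_rsrc_plan_directives
          where plan = 'DEFAULT_MAINTENANCE_PLAN';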

    This means that OTHER_GROUPS may only get 70% of the CPU (unless the other groups are not using their full quota, in which case the spare quota becomes available to OTHER_GROUPS). To ensure that the automatic maintenance tasks only get resources when OTHER_GROUPS doesn't need them, you can simply change the plan directives for DEFAULT_MAINTENANCE_PLAN, as sketched below.
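
    As a sketch of that change (the percentages below are illustrative assumptions, not values from this post): demote the autotask subplan to level 3 so OTHER_GROUPS takes all of level 2, using the DBMS_RESOURCE_MANAGER pending-area workflow:

    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
      -- Give OTHER_GROUPS the whole of level 2.
      DBMS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(
        plan => 'DEFAULT_MAINTENANCE_PLAN', group_or_subplan => 'OTHER_GROUPS',
        new_mgmt_p2 => 100);
      -- Autotasks and diagnostics only get CPU left over at level 3.
      DBMS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(
        plan => 'DEFAULT_MAINTENANCE_PLAN', group_or_subplan => 'ORA$AUTOTASK_SUB_PLAN',
        new_mgmt_p2 => 0, new_mgmt_p3 => 50);
      DBMS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(
        plan => 'DEFAULT_MAINTENANCE_PLAN', group_or_subplan => 'ORA$DIAGNOSTICS',
        new_mgmt_p2 => 0, new_mgmt_p3 => 50);
      DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /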

    As for the advantages and disadvantages of this plan:
    Advantage: The Space Advisor gives you recommendations on segments that are fragmented; the gather-stats job is crucial for performance, and if you have no custom stats-gathering scripts this job does the work automatically; the Tuning Advisor tells you how to tune badly performing queries. All of these jobs are important and resource-consuming, so to let them run quickly and efficiently with enough resources, Resource Manager gives them priority under the default maintenance group.
    Disadvantage: If you have custom jobs that run during the default maintenance window, they can suffer a negative performance impact: the default jobs are resource-intensive and run at high priority, and only the leftover resources go to OTHER_GROUPS, which obviously affects your custom jobs.