Reading the PostgreSQL Source Code (35) - Query Statements #20 (Query Optimization: Simplifying HAVING and GROUP BY)

This article briefly introduces how the PG query optimizer simplifies the HAVING and GROUP BY clauses.

1. Basic Concepts

Simplifying HAVING
Predicates in the HAVING clause that can safely be hoisted into the WHERE clause are moved there; the rest stay in HAVING. The point is that HAVING filters run after GROUP BY, so hoisting a predicate into WHERE lets the selection run earlier, during the scan, and reduces the number of tuples GROUP BY has to process.
In the following statement, the condition dwbh = '1002' is hoisted into WHERE:

testdb=# explain verbose select a.dwbh,a.xb,count(*) 
testdb-# from t_grxx a 
testdb-# group by a.dwbh,a.xb
testdb-# having count(*) >= 1 and dwbh = '1002';
                                 QUERY PLAN                                  
-----------------------------------------------------------------------------
 GroupAggregate  (cost=15.01..15.06 rows=1 width=84)
   Output: dwbh, xb, count(*)
   Group Key: a.dwbh, a.xb
   Filter: (count(*) >= 1) -- count(*) >= 1 stays in HAVING
   ->  Sort  (cost=15.01..15.02 rows=2 width=76)
         Output: dwbh, xb
         Sort Key: a.xb
         ->  Seq Scan on public.t_grxx a  (cost=0.00..15.00 rows=2 width=76)
               Output: dwbh, xb
                Filter: ((a.dwbh)::text = '1002'::text) -- hoisted into WHERE; tuples are filtered during the scan
(10 rows)

If GROUPING SETS are present alongside GROUP BY, no hoisting is done:

testdb=# explain verbose
testdb-# select a.dwbh,a.xb,count(*) 
testdb-# from t_grxx a 
testdb-# group by 
testdb-# grouping sets ((a.dwbh),(a.xb),())
testdb-# having count(*) >= 1 and dwbh = '1002'
testdb-# order by a.dwbh,a.xb;
                                  QUERY PLAN                                   
-------------------------------------------------------------------------------
 Sort  (cost=28.04..28.05 rows=3 width=84)
   Output: dwbh, xb, (count(*))
   Sort Key: a.dwbh, a.xb
   ->  MixedAggregate  (cost=0.00..28.02 rows=3 width=84)
         Output: dwbh, xb, count(*)
         Hash Key: a.dwbh
         Hash Key: a.xb
         Group Key: ()
         Filter: ((count(*) >= 1) AND ((a.dwbh)::text = '1002'::text)) -- filtered only after the table scan
         ->  Seq Scan on public.t_grxx a  (cost=0.00..14.00 rows=400 width=76)
               Output: dwbh, grbh, xm, xb, nl
(11 rows)

Simplifying GROUP BY
If the GROUP BY column list already contains all columns of a table's primary key, that table's other columns are functionally dependent on the key and can be removed from GROUP BY. This cuts down the sorting or hashing work done during grouping.

testdb=# explain verbose select a.dwbh,a.dwmc,count(*) 
testdb-# from t_dwxx a 
testdb-# group by a.dwbh,a.dwmc
testdb-# having count(*) >= 1;
                                QUERY PLAN                                
--------------------------------------------------------------------------
 HashAggregate  (cost=13.20..15.20 rows=53 width=264)
   Output: dwbh, dwmc, count(*)
   Group Key: a.dwbh, a.dwmc -- group keys are dwbh & dwmc
   Filter: (count(*) >= 1)
   ->  Seq Scan on public.t_dwxx a  (cost=0.00..11.60 rows=160 width=256)
         Output: dwmc, dwbh, dwdz
(6 rows)

testdb=# alter table t_dwxx add primary key(dwbh); -- add a primary key
ALTER TABLE
testdb=# explain verbose select a.dwbh,a.dwmc,count(*) 
from t_dwxx a 
group by a.dwbh,a.dwmc
having count(*) >= 1;
                              QUERY PLAN                               
-----------------------------------------------------------------------
 HashAggregate  (cost=1.05..1.09 rows=1 width=264)
   Output: dwbh, dwmc, count(*)
   Group Key: a.dwbh -- only dwbh is kept as the group key
   Filter: (count(*) >= 1)
   ->  Seq Scan on public.t_dwxx a  (cost=0.00..1.03 rows=3 width=256)
         Output: dwmc, dwbh, dwdz
(6 rows)

2. Source Code Walkthrough

The relevant code lives in src/backend/optimizer/plan/planner.c; the entry point is subquery_planner. The relevant fragment:

     /*
      * In some cases we may want to transfer a HAVING clause into WHERE. We
      * cannot do so if the HAVING clause contains aggregates (obviously) or
      * volatile functions (since a HAVING clause is supposed to be executed
      * only once per group).  We also can't do this if there are any nonempty
      * grouping sets; moving such a clause into WHERE would potentially change
      * the results, if any referenced column isn't present in all the grouping
      * sets.  (If there are only empty grouping sets, then the HAVING clause
      * must be degenerate as discussed below.)
      *
      * Also, it may be that the clause is so expensive to execute that we're
      * better off doing it only once per group, despite the loss of
      * selectivity.  This is hard to estimate short of doing the entire
      * planning process twice, so we use a heuristic: clauses containing
      * subplans are left in HAVING.  Otherwise, we move or copy the HAVING
      * clause into WHERE, in hopes of eliminating tuples before aggregation
      * instead of after.
      *
      * If the query has explicit grouping then we can simply move such a
      * clause into WHERE; any group that fails the clause will not be in the
      * output because none of its tuples will reach the grouping or
      * aggregation stage.  Otherwise we must have a degenerate (variable-free)
      * HAVING clause, which we put in WHERE so that query_planner() can use it
      * in a gating Result node, but also keep in HAVING to ensure that we
      * don't emit a bogus aggregated row. (This could be done better, but it
      * seems not worth optimizing.)
      *
      * Note that both havingQual and parse->jointree->quals are in
      * implicitly-ANDed-list form at this point, even though they are declared
      * as Node *.
      */
     newHaving = NIL;
     foreach(l, (List *) parse->havingQual) // walk each implicitly-ANDed HAVING clause
     {
         Node       *havingclause = (Node *) lfirst(l); // the current predicate
 
         if ((parse->groupClause && parse->groupingSets) ||
             contain_agg_clause(havingclause) ||
             contain_volatile_functions(havingclause) ||
             contain_subplans(havingclause))
         {
             /* keep it in HAVING */
             // grouping sets present, or the clause contains aggregates,
             // volatile functions, or subplans: keep it in HAVING
             newHaving = lappend(newHaving, havingclause);
         }
         else if (parse->groupClause && !parse->groupingSets)
         {
             /* move it to WHERE */
             // plain GROUP BY, no grouping sets: append to the jointree quals (WHERE)
             parse->jointree->quals = (Node *)
                 lappend((List *) parse->jointree->quals, havingclause);
         }
         else // neither GROUP BY nor grouping sets: degenerate HAVING, copy into the jointree quals
         {
             /* put a copy in WHERE, keep it in HAVING */
             parse->jointree->quals = (Node *)
                 lappend((List *) parse->jointree->quals,
                         copyObject(havingclause));
             newHaving = lappend(newHaving, havingclause);
         }
     }
     parse->havingQual = (Node *) newHaving; // install the trimmed HAVING qual
 
     /* Remove any redundant GROUP BY columns */
     remove_useless_groupby_columns(root); // drop redundant GROUP BY columns

remove_useless_groupby_columns

 /*
  * remove_useless_groupby_columns
  *      Remove any columns in the GROUP BY clause that are redundant due to
  *      being functionally dependent on other GROUP BY columns.
  *
  * Since some other DBMSes do not allow references to ungrouped columns, it's
  * not unusual to find all columns listed in GROUP BY even though listing the
  * primary-key columns would be sufficient.  Deleting such excess columns
  * avoids redundant sorting work, so it's worth doing.  When we do this, we
  * must mark the plan as dependent on the pkey constraint (compare the
  * parser's check_ungrouped_columns() and check_functional_grouping()).
  *
  * In principle, we could treat any NOT-NULL columns appearing in a UNIQUE
  * index as the determining columns.  But as with check_functional_grouping(),
  * there's currently no way to represent dependency on a NOT NULL constraint,
  * so we consider only the pkey for now.
  */
 static void
 remove_useless_groupby_columns(PlannerInfo *root)
 {
     Query      *parse = root->parse;    // the query tree
     Bitmapset **groupbyattnos;          // per-RTE bitmapsets of GROUP BY attnos
     Bitmapset **surplusvars;            // per-RTE bitmapsets of removable attnos
     ListCell   *lc;
     int         relid;
 
     /* No chance to do anything if there are less than two GROUP BY items */
     if (list_length(parse->groupClause) < 2) // fewer than two items: nothing to do
         return;
 
     /* Don't fiddle with the GROUP BY clause if the query has grouping sets */
     if (parse->groupingSets)    // grouping sets present: leave GROUP BY alone
         return;
 
     /*
      * Scan the GROUP BY clause to find GROUP BY items that are simple Vars.
      * Fill groupbyattnos[k] with a bitmapset of the column attnos of RTE k
      * that are GROUP BY items.
      */
     // collect, per RTE, the attnos that appear as GROUP BY items
     groupbyattnos = (Bitmapset **) palloc0(sizeof(Bitmapset *) *
                                            (list_length(parse->rtable) + 1));
     foreach(lc, parse->groupClause)
     {
         SortGroupClause *sgc = lfirst_node(SortGroupClause, lc);
         TargetEntry *tle = get_sortgroupclause_tle(sgc, parse->targetList);
         Var        *var = (Var *) tle->expr;
 
         /*
          * Ignore non-Vars and Vars from other query levels.
          *
          * XXX in principle, stable expressions containing Vars could also be
          * removed, if all the Vars are functionally dependent on other GROUP
          * BY items.  But it's not clear that such cases occur often enough to
          * be worth troubling over.
          */
         if (!IsA(var, Var) ||
             var->varlevelsup > 0)
             continue;
 
         /* OK, remember we have this Var */
         relid = var->varno;
         Assert(relid <= list_length(parse->rtable));
         groupbyattnos[relid] = bms_add_member(groupbyattnos[relid],
                                               var->varattno - FirstLowInvalidHeapAttributeNumber);
     }
 
     /*
      * Consider each relation and see if it is possible to remove some of its
      * Vars from GROUP BY.  For simplicity and speed, we do the actual removal
      * in a separate pass.  Here, we just fill surplusvars[k] with a bitmapset
      * of the column attnos of RTE k that are removable GROUP BY items.
      */
     surplusvars = NULL;         /* don't allocate array unless required */
     relid = 0;
     // if a relation's GROUP BY columns include all of its pkey columns, the rest are removable
     foreach(lc, parse->rtable)
     {
         RangeTblEntry *rte = lfirst_node(RangeTblEntry, lc);
         Bitmapset  *relattnos;
         Bitmapset  *pkattnos;
         Oid         constraintOid;
 
         relid++;
 
         /* Only plain relations could have primary-key constraints */
         if (rte->rtekind != RTE_RELATION)
             continue;
 
         /* Nothing to do unless this rel has multiple Vars in GROUP BY */
         relattnos = groupbyattnos[relid];
         if (bms_membership(relattnos) != BMS_MULTIPLE)
             continue;
 
         /*
          * Can't remove any columns for this rel if there is no suitable
          * (i.e., nondeferrable) primary key constraint.
          */
         pkattnos = get_primary_key_attnos(rte->relid, false, &constraintOid);
         if (pkattnos == NULL)
             continue;
 
         /*
          * If the primary key is a proper subset of relattnos then we have
          * some items in the GROUP BY that can be removed.
          */
         if (bms_subset_compare(pkattnos, relattnos) == BMS_SUBSET1)
         {
             /*
              * To easily remember whether we've found anything to do, we don't
              * allocate the surplusvars[] array until we find something.
              */
             if (surplusvars == NULL)
                 surplusvars = (Bitmapset **) palloc0(sizeof(Bitmapset *) *
                                                      (list_length(parse->rtable) + 1));
 
             /* Remember the attnos of the removable columns */
             surplusvars[relid] = bms_difference(relattnos, pkattnos);
 
             /* Also, mark the resulting plan as dependent on this constraint */
             parse->constraintDeps = lappend_oid(parse->constraintDeps,
                                                 constraintOid);
         }
     }
 
     /*
      * If we found any surplus Vars, build a new GROUP BY clause without them.
      * (Note: this may leave some TLEs with unreferenced ressortgroupref
      * markings, but that's harmless.)
      */
     if (surplusvars != NULL)
     {
         List       *new_groupby = NIL;
 
         foreach(lc, parse->groupClause)
         {
             SortGroupClause *sgc = lfirst_node(SortGroupClause, lc);
             TargetEntry *tle = get_sortgroupclause_tle(sgc, parse->targetList);
             Var        *var = (Var *) tle->expr;
 
             /*
              * New list must include non-Vars, outer Vars, and anything not
              * marked as surplus.
              */
             if (!IsA(var, Var) ||
                 var->varlevelsup > 0 ||
                 !bms_is_member(var->varattno - FirstLowInvalidHeapAttributeNumber,
                                surplusvars[var->varno]))
                 new_groupby = lappend(new_groupby, sgc);
         }
 
         parse->groupClause = new_groupby;
     }
 }

3. References

planner.c
