Types of index scans
====================

Indexes
=======

Sequential Scan:
----------------
* Read every row in the table.
* No reading of the index. Reading from an index is not free either.
* Best way to access small tables that fit in a single disk block (the block size in pg is 8K); there is no point in using an index in that case. Also the best way to access all rows in a table. (Example below.)
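
A minimal sketch of the small-table case (the table name and data are made up): the planner picks a sequential scan even though an index exists.

-- Hypothetical tiny table; it fits in a single 8K block, so the index buys nothing.
CREATE TABLE tiny (id int PRIMARY KEY, label text);
INSERT INTO tiny SELECT g, 'row ' || g FROM generate_series(1, 50) g;
ANALYZE tiny;
EXPLAIN SELECT * FROM tiny WHERE id = 7;
-- Expect something like "Seq Scan on tiny" with a filter on id,
-- because reading one block is cheaper than descending the index first.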

Index Scan:
-----------
* Index Scan or Bitmap Index Scan: read the index, hoping that we won't have to go to disk. "Index" vs. "Bitmap" is not about what kind of index we're using; they are two different ways of using the same index. An index scan reads index and table in alternation, bouncing between them one row at a time. A bitmap index scan reads all the relevant entries from the index first, then reads the table based on what the index said.
* Big performance gain when reading a few rows from a big table: go to the index, find the matching rows, fetch only those from the table.
* Only access method that can return rows in sorted order - useful in combination with LIMIT. (Example below.)
* Disadvantage: causes random I/O against the base table. Read a row from the index, then a row from the table, and so on.
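
A hedged example of the LIMIT case (foo is assumed to be large, and the index is created here just for the sketch):

CREATE INDEX foo_x_idx ON foo (x);
EXPLAIN SELECT * FROM foo ORDER BY x LIMIT 10;
-- Typical plan: Limit -> Index Scan using foo_x_idx on foo.
-- The index already returns rows in x order, so no separate Sort node is needed
-- and only the first 10 rows are ever fetched from the table.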

Bitmap Index Scan:
------------------
* Scans all matching index entries before examining the base table; this populates a TID bitmap.
* Table I/O is sequential, with skips; results come out in physical order.
* Can efficiently combine data from multiple indexes: the TID bitmap can handle boolean ANDs and ORs. (Example below.)
* Handles LIMIT poorly.
* Adjust the page cost parameters so that index scans vs. bitmap scans get used on the queries where each makes sense.
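
A sketch of the multi-index case (the index names are hypothetical; the plan shape is typical, not guaranteed):

-- Assume separate indexes on foo(a) and foo(b).
EXPLAIN SELECT * FROM foo WHERE a = 1 AND b = 2;
-- A typical plan:
--   Bitmap Heap Scan on foo
--     Recheck Cond: ((a = 1) AND (b = 2))
--     -> BitmapAnd
--          -> Bitmap Index Scan on foo_a_idx   (Index Cond: a = 1)
--          -> Bitmap Index Scan on foo_b_idx   (Index Cond: b = 2)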

SMALL INDEXES ARE BETTER. Fewer columns are better. Avoid multicolumn indexes.

Join planning
=============
* Fixing the join order and join strategy is the harder part of query planning.
* When the search space is small, the planner does a nearly exhaustive search. When the search space is too large, it uses GEQO (the genetic query optimizer) to limit planning time and memory usage.

Join Strategies
---------------
Nested loop
-----------

for (each outer tuple)
    for (each inner tuple)
        if (join condition is met)
            emit result row;

For a 3-table join of a, b and c, reduce the problem to a two-table join: (a JOIN b) JOIN c, or a JOIN (b JOIN c), or ... (Plan sketch below.)
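
A hedged sketch of a nested-loop plan shape (table, column, and index names are assumptions):

EXPLAIN SELECT * FROM foo JOIN bar ON foo.a = bar.a WHERE foo.b = 1;
-- One possible plan:
--   Nested Loop
--     -> Seq Scan on foo   (Filter: b = 1)                             <- outer side
--     -> Index Scan using bar_a_idx on bar   (Index Cond: a = foo.a)   <- inner side, probed once per outer row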

Merge join
----------
* Only handles equality joins.
* Puts both input relations in sorted order (via a sort or an index scan), then matches up equal values by scanning through the two in parallel.
* Often the only way to handle really big datasets, but merge joins are slower.

Hash join
---------
* Also handles only equality joins.
* Hash each row from the inner relation to create a hash table, then hash the outer relation and probe the hash table for matches. (Example plan below.)
* Very fast - but requires enough memory to store the inner tuples. The planner can get around this by splitting the work into multiple "batches": if work_mem is 1MB and the hash table needs 2MB, it uses two batches.
* Not guaranteed to retain input ordering, because the join may run in multiple batches; a plan that needs ordered output may therefore need an extra sort, raising its cost.
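
A hedged example of a hash join plan (orders and customers are made-up tables):

EXPLAIN SELECT * FROM orders o JOIN customers c ON o.customer_id = c.id;
-- A common plan shape for an equality join between two sizable tables:
--   Hash Join
--     Hash Cond: (o.customer_id = c.id)
--     -> Seq Scan on orders o
--     -> Hash                              <- inner side is hashed into work_mem
--          -> Seq Scan on customers c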

Join Removal
------------
* The planner can detect that a LEFT OUTER JOIN contributes nothing to the result and remove it entirely. Ex: SELECT foo.a, foo.b FROM foo LEFT OUTER JOIN bar ON foo.a = bar.a
* Typical case: a big view joining many tables, where you request only a subset of the attributes.
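
A sketch of the removal for the example above; for the planner to prove the join is redundant, bar.a has to be unique (e.g. bar's primary key) and no column of bar may be referenced:

-- Assume bar.a is bar's primary key.
EXPLAIN SELECT foo.a, foo.b FROM foo LEFT OUTER JOIN bar ON foo.a = bar.a;
-- The join can neither add nor remove foo rows and no bar column is used,
-- so the expected plan is simply: Seq Scan on foo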

Join Reordering
---------------
* It doesn't matter in what order you write the JOINs; postgres will decide the best plan. Available for ages...
* Can't always reorder in the presence of LEFT JOINs.
* It will reorder joins as much as it can, but is constrained by the left joins.

These two queries are different and can't be reordered into each other:

SELECT * FROM
(foo JOIN bar ON foo.x = bar.x)
LEFT JOIN baz ON foo.y = baz.y

SELECT * FROM
(foo LEFT JOIN baz ON foo.y = baz.y)
JOIN bar ON foo.x = bar.x

EXPLAIN estimates
-----------------
* Estimated number of rows.
* Average width (in bytes) of each row (not that important).
* Cost is in "abstract cost units", e.g. the relative cost of an index fetch vs. a sequential page read. These are not milliseconds.
* The first cost number is startup cost, the second is total cost.
* Actual time is in milliseconds, provided by EXPLAIN ANALYZE.
* When looking at EXPLAIN ANALYZE output, look for:
  * Where is most of the time spent? In a bad query plan, find which branch was most costly. Which table is the problem? Would an index help?
  * Are the estimates and the actuals very different? Have you analyzed the table recently? Sometimes these problems are caused by correlation between columns and may require redesigning the schema or the query, but that is a rare/complex case.
* It's useful to strip out JOINs or conditions that don't matter to the problem, e.g. boil a 5-table join down to a 2-table join. If you can reproduce the problem with the 2-table query, it will be much simpler to troubleshoot. (Annotated output sketch below.)
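
A hedged illustration of how the numbers line up (the plan shape, costs, and timings are made up for the example):

EXPLAIN ANALYZE SELECT * FROM foo WHERE x = 42;
-- Example output shape (numbers are illustrative only):
--   Index Scan using foo_x_idx on foo
--       (cost=0.43..8.45 rows=1 width=44)            <- estimates: startup cost .. total cost, rows, width
--       (actual time=0.031..0.034 rows=1 loops=1)    <- measured: milliseconds, real row count, executions
-- Compare the estimated rows= with the actual rows=; a big mismatch usually
-- points at stale or insufficient statistics.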

Aggregates and DISTINCT
=======================

Plain aggregate
---------------
* Aggregates everything into one group, producing one row. Ex: SELECT count(*) FROM foo WHERE x = 2;
* Gets more complex when there's a GROUP BY and many groups in the result. In that case: sorted or hashed aggregation.

Sorted aggregate
----------------
* Sort the data, read through it, and whenever a new value appears, emit the aggregate for the prior group.

Hashed aggregate
----------------
* Insert each input row into a hash table keyed on the grouping columns. At the end, aggregate all the groups. (Example below.)
* Fast.
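
A sketch contrasting the two GROUP BY strategies (foo is assumed; which one you get depends on the estimated number of groups and on work_mem):

EXPLAIN SELECT x, count(*) FROM foo GROUP BY x;
-- With few enough groups to fit in memory, typically:
--   HashAggregate          (one hash table entry per distinct x)
--     -> Seq Scan on foo
-- If the planner expects the groups not to fit in work_mem, typically:
--   GroupAggregate
--     -> Sort   (Sort Key: x)
--          -> Seq Scan on foo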

Statistics
==========
* Seq scan vs. index scan vs. bitmap index scan? Nested loop vs. merge join vs. hash join? These choices are driven by statistics.
* ANALYZE gathers the statistics; the planner uses them to decide. With bad statistics, the plans will suck. (Commands sketched below.)
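
A sketch of the relevant commands (the column name is hypothetical):

ANALYZE foo;                                          -- refresh planner statistics for foo
ALTER TABLE foo ALTER COLUMN x SET STATISTICS 500;    -- gather more detail for a skewed column
ANALYZE foo;                                          -- re-gather with the higher target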

Confusing the planner
---------------------
* Some queries can't be estimated properly no matter how good the statistics are.
* If the planner underestimates the row count, it may choose an index scan over a sequential scan, or a nested loop instead of a hash or merge join.
* If the planner overestimates the row count, it may choose a seq scan instead of an index scan, or a merge or hash join instead of a nested loop.
* Small values of LIMIT tilt the planner toward plans with a fast start and magnify the effect of bad estimates. You can hack around this.

SELECT * FROM foo WHERE a = 1 AND b = 1

* It's impossible for the planner to know how many rows will match both of these criteria together; it only has per-column statistics. (Worked sketch below.)
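
A worked sketch of why (the percentages are made up): suppose 10% of rows have a = 1 and 10% have b = 1. Treating the columns as independent, the planner estimates 0.10 * 0.10 = 1% of the rows. If a and b are strongly correlated, the real answer could be close to 10%, an order of magnitude off.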

SELECT * FROM foo WHERE (a + 0) = a

* The planner can't estimate an arbitrary expression like this; it falls back to assuming 0.5% of rows will match.

Query planner params
--------------------

* seq_page_cost (1.0), random_page_cost (4.0) - reduce these if the database is largely cached.
* default_statistics_target (100) - level of detail of statistics gathering. Can be overridden on specific columns.
* enable_hashjoin, enable_sort, etc. - useful for testing queries and plans. Don't use these in production; if you must, use SET to disable them for just one session/query. (Sketch below.)
* work_mem - amount of memory per sort or hash. A single query can use many sorts and hashes.
* from_collapse_limit, join_collapse_limit, geqo_threshold - can be raised, but planning for some queries can blow up when they are. Careful.
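
A sketch of per-session experimentation with these settings (the query is just the earlier example; never change the enable_* switches globally):

SET enable_hashjoin = off;                          -- this session only: see what the planner does without hash joins
EXPLAIN ANALYZE SELECT * FROM foo WHERE a = 1 AND b = 1;
RESET enable_hashjoin;                              -- put it back
SET work_mem = '64MB';                              -- can also be raised per session for one big sort or hash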

Things that are slow
--------------------
* DISTINCT
* PL/pgSQL loops.
* Repeated calls to SQL or PL/pgSQL functions. SELECT id, some_function(id) FROM table; <- bad, the function runs once per row.