Comparing changes
base repository: postgres/postgres
base: master@{1day}
head repository: postgres/postgres
compare: master
- 17 commits
- 56 files changed
- 13 contributors
Commits on Jul 1, 2025
Show sizes of FETCH queries as constants in pg_stat_statements
Prior to this patch, every FETCH call would generate a unique queryId with a different size specified. Depending on the workload, this could lead to significant bloat in pg_stat_statements, as repeatedly calling a specific cursor would result in a new queryId each time. For example, FETCH 1 c1; and FETCH 2 c1; would produce different queryIds.

This patch improves the situation by normalizing the fetch size, so that semantically similar statements generate the same queryId. As a result, statements like the following, which differ syntactically but have the same effect, now share a single queryId (see the sketch below):

    FETCH FROM c1
    FETCH NEXT c1
    FETCH 1 c1

To normalize based on the keyword used in FETCH, FetchStmt is tweaked with a new FetchDirectionKeywords. This matters for "howMany", which can be set to a negative value depending on the direction, and we want to normalize the queries with enough information about the direction keywords provided, including RELATIVE, ABSOLUTE, and all the ALL variants.

Author: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/CAA5RZ0tA6LbHCg2qSS+KuM850BZC_+ZgHV7Ug6BXw22TNyF+MA@mail.gmail.com
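A minimal sketch of the effect, assuming pg_stat_statements is loaded and tracking utility commands; the cursor, query, and expected result are illustrative, inferred from the description above:

    -- All three fetch variants should now collapse into one entry.
    BEGIN;
    DECLARE c1 CURSOR FOR SELECT generate_series(1, 100);
    FETCH FROM c1;
    FETCH NEXT c1;
    FETCH 1 c1;
    COMMIT;

    SELECT query, calls FROM pg_stat_statements WHERE query LIKE 'FETCH%';
    -- Expected: a single normalized row with calls = 3, rather than three rows.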
Commit: bee23ea
Commits on Jul 2, 2025
Move code for the bytea data type from varlena.c to new bytea.c
This commit moves all the routines related to the bytea data type into its own new file, called bytea.c, clearing some of the bloat in varlena.c. This includes the routines for:
- Input, output, receive and send
- Comparison
- Casts to integer types
- bytea-specific functions

The internals of the routines moved here are unchanged, with one exception: in bytea_string_agg_transfn(), the call to makeStringAggState() is replaced by the internals of that routine, which remains in varlena.c. This simplifies the move to the new file by not having to expose makeStringAggState().

Author: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/CAJ7c6TMPVPJ5DL447zDz5ydctB8OmuviURtSwd=PHCRFEPDEAQ@mail.gmail.com
Commit: b45242f
Fix bug in archive streamer with LZ4 decompression
When decompressing some input data, the calculations for the initial starting point and the initial size were incorrect, potentially leading to failures when decompressing contents with LZ4. These initialization points are fixed in this commit, bringing the logic closer to what exists for gzip and zstd.

The compressed data itself is fine (for example, backups taken with LZ4 can still be decompressed with a standalone "lz4" command); only the decompression code reading the input data was affected by this issue. This code path impacts pg_basebackup and pg_verifybackup, which can use the LZ4 decompression routines with an archive streamer, as well as any tools that use the archive streamers in src/fe_utils/.

The issue is easier to reproduce with files that have a low compression rate, like ones filled with random data, at a size of at least 512kB, but it could happen with anything stored in a data folder. Some tests are added based on this idea, with a file filled with random bytes grabbed from the backend, written at the root of the data folder. This proved good enough to reproduce the original problem.

Author: Mikhail Gribkov <youzhick@gmail.com>
Discussion: https://postgr.es/m/CAMEv5_uQS1Hg6KCaEP2JkrTBbZ-nXQhxomWrhYQvbdzR-zy-wA@mail.gmail.com
Backpatch-through: 15
Commit: 3369a3b
Remove implicit cast from 'void *'
Commit e2809e3 added code to a header which assigns a pointer to void to a pointer to unsigned char. This causes build errors for extensions written in C++. Fix by adding an explicit cast.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CANWCAZaCq9AHBuhs%3DMx7Gg_0Af9oRU7iAqr0itJCtfmsWwVmnQ%40mail.gmail.com
Backpatch-through: 18
Commit: 9adb58a
Fix missing FSM vacuum opportunities on tables without indexes.
Commit c120550 optimized the vacuuming of relations without indexes (a.k.a. one-pass strategy) by directly marking dead item IDs as LP_UNUSED. However, the periodic FSM vacuum was still checking if dead item IDs had been marked as LP_DEAD when attempting to vacuum the FSM every VACUUM_FSM_EVERY_PAGES blocks. This condition was never met due to the optimization, resulting in missed FSM vacuum opportunities.

This commit modifies the periodic FSM vacuum condition to use the number of tuples deleted during HOT pruning. This count includes items marked as either LP_UNUSED or LP_REDIRECT, both of which are expected to result in new free space to report.

Back-patch to v17, where the vacuum optimization for tables with no indexes was introduced.

Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://postgr.es/m/CAD21AoBL8m6B9GSzQfYxVaEgvD7-Kr3AJaS-hJPHC+avm-29zw@mail.gmail.com
Backpatch-through: 17
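A sketch of the scenario this affects; the table and row counts are illustrative:

    -- No indexes, so vacuum uses the one-pass strategy: dead item IDs are
    -- marked LP_UNUSED directly rather than LP_DEAD.
    CREATE TABLE no_indexes (i int);
    INSERT INTO no_indexes SELECT generate_series(1, 1000000);
    DELETE FROM no_indexes WHERE i % 2 = 0;
    VACUUM no_indexes;  -- the FSM is now also vacuumed periodically during this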
Commit: 3811ca3
Use per-field comments for IndexInfo.
Instead of one big header comment listing all the fields, use per-field comments. This makes the relevant comments easier to find, and it will also make it less likely that comments are not updated when fields are added or removed, as has happened in the past.

Author: Japin Li <japinli@hotmail.com>
Discussion: https://www.postgresql.org/message-id/flat/ME0P300MB04453E6C7EA635F0ECF41BFCB6832%40ME0P300MB0445.AUSP300.PROD.OUTLOOK.COM
Commit: de5aa15
meson: Increase minimum version to 0.57.2
The previous minimum was to maintain support for Python 3.5, but we now require Python 3.6 anyway (commit 45363fc), so that reason is obsolete. A small bump to Meson 0.57 allows getting rid of a fair amount of version conditionals and silences some future-deprecated warnings.

With the version bump, the following deprecation warnings appeared and are fixed:

    WARNING: Project targets '>=0.57' but uses feature deprecated since '0.55.0': ExternalProgram.path. use ExternalProgram.full_path() instead
    WARNING: Project targets '>=0.57' but uses feature deprecated since '0.56.0': meson.build_root. use meson.project_build_root() or meson.global_build_root() instead.

It turns out that meson 0.57.0 and 0.57.1 are buggy for our use, so the minimum is actually set to 0.57.2. This is specific to this version series; in the future we won't necessarily need to be this precise.

Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/42e13eb0-862a-441e-8d84-4f0fd5f6def0%40eisentraut.org
Commit: f039c22
doc: pg_buffercache documentation wordsmithing
A few words seemed to have gone missing in the leading paragraphs.

Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/aGTQYZz9L0bjlzVL@ip-10-97-1-34.eu-west-3.compute.internal
Backpatch-through: 18
Commit: 8eede2c
Make handling of redundant nbtree keys more robust.
nbtree preprocessing's handling of redundant (and contradictory) keys created problems for scans with = arrays. It was just about possible for a scan with an = array key and one or more redundant keys (keys that preprocessing could not eliminate due to an incomplete opfamily and a cross-type key) to get stuck. Testing has shown that infinite cycling, where the scan never manages to make forward progress, was possible. This could happen when the scan's arrays were reset in _bt_readpage's forcenonrequired=true path (added by bugfix commit 5f4d98d) when the arrays weren't at least advanced up to the same point that they were in at the start of the _bt_readpage call. Earlier redundant keys prevented the finaltup call to _bt_advance_array_keys from reaching lower-order keys that needed to be used to sufficiently advance the scan's arrays.

To fix, make preprocessing leave the scan's keys in a state that is as close as possible to how it'll usually leave them (in the common case where there are no redundant keys that preprocessing failed to eliminate). Now nbtree preprocessing _reliably_ leaves behind at most one required >/>= key per index column, and at most one required </<= key per index column. Columns that have one or more = keys that are eligible to be marked required (based on the traditional rules) prioritize the = keys over redundant inequality keys; they'll _reliably_ be left with only one of the = keys as the index column's only required key.

Keys that are not marked required (whether due to the new preprocessing step running or for some other reason) are relocated to the end of the so->keyData[] array as needed. That way they'll always be evaluated after the scan's required keys, and so cannot prevent code in places like _bt_advance_array_keys and _bt_first from reaching a required key.

Also teach _bt_first to decide which initial positioning keys to use based on the same requiredness markings that have long been used by _bt_checkkeys/_bt_advance_array_keys. This is a necessary condition for reliably avoiding infinite cycling. _bt_advance_array_keys expects to be able to reason about what'll happen in the next _bt_first call should it start another primitive index scan, by evaluating inequality keys that were marked required in the opposite-to-scan direction only. Now everybody (_bt_first, _bt_checkkeys, and _bt_advance_array_keys) will always agree on which exact key will be used on each index column to start and/or end the scan (except when row compare keys are involved, which have similar problems not addressed by this commit).

An upcoming commit will finish off the work started by this commit by harmonizing how _bt_first, _bt_checkkeys, and _bt_advance_array_keys apply row compare keys to start and end scans.

This fixes what was arguably an oversight in either commit 5f4d98d or commit 8a51027.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Heikki Linnakangas <heikki.linnakangas@iki.fi>
Discussion: https://postgr.es/m/CAH2-Wz=ds4M+3NXMgwxYxqU8MULaLf696_v5g=9WNmWL2=Uo2A@mail.gmail.com
Backpatch-through: 18
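For orientation, the affected scans combine an = array (ScalarArrayOp) key with additional keys on the same index column; this qual is purely illustrative and may well be simplified by preprocessing in practice:

    CREATE TABLE t (a int);
    CREATE INDEX t_a_idx ON t (a);
    -- An = array key plus an inequality key on the same column; redundancy can
    -- only be proven (and the inequality eliminated) with a complete opfamily.
    SELECT * FROM t WHERE a = ANY (ARRAY[1, 5, 9]) AND a > 0;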
Commit: f09816a
Make row compares robust during nbtree array scans.
Recent nbtree bugfix commit 5f4d98d added a special case to the code that sets up a page-level prefix of keys that are definitely satisfied by every tuple on the page: whenever _bt_set_startikey reached a row compare key, we'd refuse to apply the pstate.forcenonrequired behavior in scans where that usually happens (scans with a higher-order array key). That hack made the scan avoid essentially the same infinite cycling behavior that also affected nbtree scans with redundant keys (keys that preprocessing could not eliminate) prior to commit f09816a.

There are now serious doubts about this row compare workaround. Testing has shown that a scan with a row compare key and an array key could still read the same leaf page twice (without the scan's direction changing), which isn't supposed to be possible following the SAOP enhancements added by Postgres 17 commit 5bf748b. Also, we still allowed a required row compare key to be used with forcenonrequired mode when its header key happened to be beyond the pstate.ikey set by _bt_set_startikey, which was complicated and brittle.

The underlying problem was that row compares had inconsistent rules around how scans start (which keys can be used for initial positioning purposes) and how scans end (which keys can set continuescan=false). Quals with redundant keys that could not be eliminated by preprocessing also had that same quality to them prior to today's bugfix f09816a. It now seems prudent to bring row compare keys in line with the new charter for required keys, by making the start and end rules symmetric.

This commit fixes two points of disagreement between _bt_first and _bt_check_rowcompare. Firstly, _bt_check_rowcompare was capable of ending the scan at the point where it needed to compare an ISNULL-marked row compare member that came immediately after a required row compare member. _bt_first now has symmetric handling for NULL row compares. Secondly, _bt_first had its own ideas about which keys were safe to use for initial positioning purposes. It could use fewer or more keys than _bt_check_rowcompare. _bt_first now uses the same requiredness markings as _bt_check_rowcompare for this.

Now that _bt_first and _bt_check_rowcompare agree on how to start and end scans, we can get rid of the forcenonrequired special case, without any risk of infinite cycling. This approach also makes row compare keys behave more like regular scalar keys, particularly within _bt_first.

Fixing these inconsistencies necessitates dealing with a related issue with the way that row compares were marked required by preprocessing: we didn't mark any lower-order row members required following 2016 bugfix commit a298a1e. That approach was overly broad. The bug in question was actually an oversight in how _bt_check_rowcompare dealt with tuple NULL values that failed to satisfy a scan key marked required in the opposite scan direction (it was a bug in 2011 commits 6980f81 and 882368e, not a bug in 2006 commit 3a0a16c). Go back to marking row compare members as required using the original 2006 rules, and fix the 2016 bug in a more principled way: by limiting use of the "set continuescan=false with a key required in the opposite scan direction upon encountering a NULL tuple value" optimization to the first/most significant row member key.

While it isn't safe to use an implied IS NOT NULL qualifier to end the scan when it comes from a required lower-order row compare member key, it _is_ generally safe for such a required member key to end the scan -- provided the key is marked required in the _current_ scan direction.

This fixes what was arguably an oversight in either commit 5f4d98d or commit 8a51027. It is a direct follow-up to today's commit f09816a.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Heikki Linnakangas <heikki.linnakangas@iki.fi>
Discussion: https://postgr.es/m/CAH2-Wz=pcijHL_mA0_TJ5LiTB28QpQ0cGtT-ccFV=KzuunNDDQ@mail.gmail.com
Backpatch-through: 18
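For context, a sketch of what a row compare qual looks like; the table, index, and values are illustrative:

    CREATE TABLE rc (a int, b int);
    CREATE INDEX rc_a_b_idx ON rc (a, b);
    INSERT INTO rc SELECT i / 10, i % 10 FROM generate_series(0, 99) i;

    -- A row compare key: _bt_first can use it for initial positioning, while
    -- _bt_check_rowcompare evaluates it as the scan proceeds. This commit makes
    -- the two agree on which keys may start and end such scans.
    SELECT * FROM rc WHERE (a, b) > (5, 5) ORDER BY a, b;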
Commit: bd3f59f
Fix error message for ALTER CONSTRAINT ... NOT VALID
Trying to alter a constraint so that it becomes NOT VALID results in an error that assumes the constraint is a foreign key. This is potentially wrong, so give a more generic error message. While at it, give CREATE CONSTRAINT TRIGGER a better error message as well.

Co-authored-by: jian he <jian.universality@gmail.com>
Co-authored-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Co-authored-by: Álvaro Herrera <alvherre@kurilemu.de>
Co-authored-by: Amul Sul <sulamul@gmail.com>
Discussion: https://postgr.es/m/CACJufxHSp2puxP=q8ZtUGL1F+heapnzqFBZy5ZNGUjUgwjBqTQ@mail.gmail.com
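A hedged illustration; the table and constraint are made up, and the exact new wording is not reproduced here:

    CREATE TABLE t (i int, CONSTRAINT t_i_check CHECK (i > 0));
    -- Still fails, but the error no longer presumes a foreign-key constraint:
    ALTER TABLE t ALTER CONSTRAINT t_i_check NOT VALID;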
Commit: c989aff
Allow width_bucket()'s "operand" input to be NaN.
The array-based variant of width_bucket() has always accepted NaN inputs, treating them as equal but larger than any non-NaN, as we do in ordinary comparisons. But up to now, the four-argument variants threw errors for a NaN operand. This is inconsistent and unnecessary, since we can perfectly well regard NaN as falling after the last bucket.

We do still throw error for NaN or infinity histogram-bound inputs, since there's no way to compute sensible bucket boundaries. Arguably this is a bug fix, but given the lack of field complaints I'm content to fix it in master.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://postgr.es/m/2822872.1750540911@sss.pgh.pa.us
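A short sketch of the new behavior; the expected results are inferred from "NaN falls after the last bucket":

    SELECT width_bucket('NaN'::float8, 0.0, 10.0, 5);          -- expected: 6 (count + 1); previously an error
    SELECT width_bucket('NaN'::numeric, 0.0, 10.0, 5);         -- numeric variant behaves likewise
    SELECT width_bucket(5.0::float8, 'NaN'::float8, 10.0, 5);  -- still an error: NaN histogram bound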
Commit: 7374b3a
Update obsolete row compare preprocessing comments.
Restore nbtree preprocessing comments describing how we mark nbtree row compare members required to how they were prior to 2016 bugfix commit a298a1e.

Oversight in commit bd3f59f, which made nbtree preprocessing revert to the original 2006 rules, but neglected to revert these comments.

Backpatch-through: 18
Commit: 9ca30a0
Add GetNamedDSA() and GetNamedDSHash().
Presently, the dynamic shared memory (DSM) registry only provides GetNamedDSMSegment(), which allocates a fixed-size segment. To use the DSM registry for more sophisticated things like dynamic shared memory areas (DSAs) or a hash table backed by a DSA (dshash), users need to create a DSM segment that stores various handles and LWLock tranche IDs and to write fairly complicated initialization code. Furthermore, there is likely little variation in this initialization code between libraries.

This commit introduces functions that simplify allocating a DSA or dshash within the DSM registry. These functions are very similar to GetNamedDSMSegment(). Notable differences include the lack of an initialization callback parameter and the prohibition of calling the functions more than once for a given entry in each backend (which should be trivially avoidable in most circumstances). While at it, this commit bumps the maximum DSM registry entry name length from 63 bytes to 127 bytes.

Also note that even though one could presumably detach/destroy the DSAs and dshashes created in the registry, such use-cases are not yet well-supported, if for no other reason than the associated DSM registry entries cannot be removed. Adding such support is left as a future exercise.

The test_dsm_registry test module contains tests for the new functions and also serves as a complete usage example.

Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Reviewed-by: Florents Tselai <florents.tselai@gmail.com>
Reviewed-by: Rahila Syed <rahilasyed90@gmail.com>
Discussion: https://postgr.es/m/aEC8HGy2tRQjZg_8%40nathan
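As a rough usage sketch via the test module's SQL wrappers (set_val_in_shmem here is an assumed counterpart of the get_val_in_shmem function named in a later commit below):

    CREATE EXTENSION test_dsm_registry;
    SELECT set_val_in_shmem(1236);  -- store a value in registry-backed shared memory
    -- From any other backend in the same cluster:
    SELECT get_val_in_shmem();      -- expected to return 1236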
Commit: fe07100
Make more use of RELATION_IS_OTHER_TEMP().
A few places were open-coding it instead of using this handy macro.

Author: Junwang Zhao <zhjwpku@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Discussion: https://postgr.es/m/CAEG8a3LjTGJcOcxQx-SUOGoxstG4XuCWLH0ATJKKt_aBTE5K8w%40mail.gmail.com
Commit: bb10938
Fix cross-version upgrade test breakage from commit fe07100.
In commit fe07100, I renamed a couple of functions in test_dsm_registry to make it clear what they are testing. However, the buildfarm's cross-version upgrade tests run pg_upgrade with the test modules installed, so this caused errors like:

    ERROR: could not find function "get_val_in_shmem" in file ".../test_dsm_registry.so"

To fix, revert those renames. I could probably get away with only un-renaming the C symbols, but I figured I'd avoid introducing function name mismatches. Also, AFAICT the buildfarm's cross-version upgrade tests do not run the test module tests post-upgrade, else we'll need to properly version the extension.

Per buildfarm member crake.

Discussion: https://postgr.es/m/aGVuYUNW23tStUYs%40nathan
Commit: 0c2b717
Correctly copy the target host identification in PQcancelCreate.
PQcancelCreate failed to copy struct pg_conn_host's "type" field, instead leaving it zero (a/k/a CHT_HOST_NAME). This seemingly has no great ill effects if it should have been CHT_UNIX_SOCKET instead, but if it should have been CHT_HOST_ADDRESS then a null-pointer dereference will occur when the cancelConn is used.

Bug: #18974
Reported-by: Maxim Boguk <maxim.boguk@gmail.com>
Author: Sergei Kornilov <sk@zsrv.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/18974-575f02b2168b36b3@postgresql.org
Backpatch-through: 17
Commit: fe05430
This comparison could not be rendered here; it may be too large. To see it locally, run:

    git diff master@{1day}...master