# Checking port 52988
# Found port 52988
Name: primary
Data directory: /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/t_027_stream_regress_primary_data/pgdata
Backup directory: /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/t_027_stream_regress_primary_data/backup
Archive directory: /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/t_027_stream_regress_primary_data/archives
Connection string: port=52988 host=/tmp/GXpacC8XDJ
Log file: /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/log/027_stream_regress_primary.log
[19:39:10.677](0.031s) # initializing database system by copying initdb template
# Running: cp -RPp /tmp/cirrus-ci-build/build/tmp_install/initdb-template /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/t_027_stream_regress_primary_data/pgdata
# Running: /tmp/cirrus-ci-build/build/src/test/regress/pg_regress --config-auth /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/t_027_stream_regress_primary_data/pgdata
### Starting node "primary"
# Running: pg_ctl -w -D /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/t_027_stream_regress_primary_data/pgdata -l /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/log/027_stream_regress_primary.log -o --cluster-name=primary start
waiting for server to start.... done
server started
# Postmaster PID for node "primary" is 27653
(standby_1,)
[19:39:11.013](0.336s) ok 1 - physical slot created on primary
# Taking pg_basebackup my_backup from node "primary"
# Running: pg_basebackup -D /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/t_027_stream_regress_primary_data/backup/my_backup -h /tmp/GXpacC8XDJ -p 52988 --checkpoint fast --no-sync
# Backup finished
# Checking port 52989
# Found port 52989
Name: standby_1
Data directory: /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/t_027_stream_regress_standby_1_data/pgdata
Backup directory: /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/t_027_stream_regress_standby_1_data/backup
Archive directory: /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/t_027_stream_regress_standby_1_data/archives
Connection string: port=52989 host=/tmp/GXpacC8XDJ
Log file: /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/log/027_stream_regress_standby_1.log
# Initializing node "standby_1" from backup "my_backup" of node "primary"
### Enabling streaming replication for node "standby_1"
### Starting node "standby_1"
# Running: pg_ctl -w -D /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/t_027_stream_regress_standby_1_data/pgdata -l /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/log/027_stream_regress_standby_1.log -o --cluster-name=standby_1 start
waiting for server to start....
done
server started
# Postmaster PID for node "standby_1" is 27776
# using postmaster on /tmp/GXpacC8XDJ, port 52988
ok 1 - test_setup 895 ms
# parallel group (20 tests): pg_lsn txid varchar char text int2 oid int4 name regproc uuid float4 money float8 int8 boolean enum bit rangetypes numeric
ok 2 + boolean 272 ms
ok 3 + char 127 ms
ok 4 + name 175 ms
ok 5 + varchar 122 ms
ok 6 + text 128 ms
ok 7 + int2 133 ms
ok 8 + int4 147 ms
ok 9 + int8 215 ms
ok 10 + oid 134 ms
ok 11 + float4 182 ms
ok 12 + float8 212 ms
ok 13 + bit 358 ms
ok 14 + numeric 1463 ms
ok 15 + txid 106 ms
ok 16 + uuid 181 ms
ok 17 + enum 303 ms
ok 18 + money 190 ms
ok 19 + rangetypes 1292 ms
ok 20 + pg_lsn 97 ms
ok 21 + regproc 178 ms
# parallel group (20 tests): lseg circle macaddr path line md5 time macaddr8 numerology timetz date inet point timestamp strings interval timestamptz polygon multirangetypes box
ok 22 + strings 545 ms
ok 23 + md5 148 ms
ok 24 + numerology 219 ms
ok 25 + point 424 ms
ok 26 + lseg 59 ms
ok 27 + line 140 ms
ok 28 + box 1404 ms
ok 29 + path 110 ms
ok 30 + polygon 1055 ms
ok 31 + circle 93 ms
ok 32 + date 273 ms
ok 33 + time 213 ms
ok 34 + timetz 223 ms
ok 35 + timestamp 520 ms
ok 36 + timestamptz 692 ms
ok 37 + interval 592 ms
ok 38 + inet 297 ms
ok 39 + macaddr 99 ms
ok 40 + macaddr8 214 ms
ok 41 + multirangetypes 1227 ms
# parallel group (12 tests): unicode misc_sanity xid comments expressions tstypes type_sanity mvcc geometry horology regex opr_sanity
ok 42 + geometry 476 ms
ok 43 + horology 476 ms
ok 44 + tstypes 211 ms
ok 45 + regex 923 ms
ok 46 + type_sanity 232 ms
ok 47 + opr_sanity 1026 ms
ok 48 + misc_sanity 96 ms
ok 49 + comments 104 ms
ok 50 + expressions 209 ms
ok 51 + unicode 39 ms
ok 52 + xid 103 ms
ok 53 + mvcc 308 ms
# parallel group (5 tests): copyselect copydml copy insert_conflict insert
ok 54 + copy 276 ms
ok 55 + copyselect 37 ms
ok 56 + copydml 65 ms
ok 57 + insert 889 ms
ok 58 + insert_conflict 377 ms
# parallel group (7 tests): create_function_c create_operator create_type create_procedure create_schema create_misc create_table
ok 59 + create_function_c 66 ms
ok 60 + create_misc 193 ms
ok 61 + create_operator 133 ms
ok 62 + create_procedure 180 ms
ok 63 + create_table 1465 ms
ok 64 + create_type 155 ms
ok 65 + create_schema 182 ms
# parallel group (5 tests): index_including index_including_gist create_view create_index_spgist create_index
ok 66 + create_index 2950 ms
ok 67 + create_index_spgist 2548 ms
ok 68 + create_view 1512 ms
ok 69 + index_including 617 ms
ok 70 + index_including_gist 1066 ms
# parallel group (16 tests): create_cast errors create_aggregate hash_func roleattributes drop_if_exists typed_table create_function_sql select infinite_recurse create_am vacuum constraints updatable_views inherit triggers
ok 71 + create_aggregate 180 ms
ok 72 + create_function_sql 528 ms
ok 73 + create_cast 78 ms
ok 74 + constraints 1812 ms
ok 75 + triggers 5277 ms
ok 76 + select 564 ms
ok 77 + inherit 3814 ms
ok 78 + typed_table 473 ms
ok 79 + vacuum 1404 ms
ok 80 + drop_if_exists 448 ms
ok 81 + updatable_views 2400 ms
ok 82 + roleattributes 234 ms
ok 83 + create_am 791 ms
ok 84 + hash_func 203 ms
ok 85 + errors 158 ms
ok 86 + infinite_recurse 674 ms
ok 87 - sanity_check 222 ms
# parallel group (20 tests): select_distinct_on delete case select_having namespace select_implicit select_into random prepared_xacts portals transactions union select_distinct arrays subselect hash_index update join aggregates btree_index
ok 88 + select_into 310 ms
ok 89 + select_distinct 973 ms
ok 90 + select_distinct_on 141 ms
ok 91 + select_implicit 279 ms
ok 92 + select_having 213 ms
ok 93 + subselect 1236 ms
ok 94 + union 808 ms
ok 95 + case 185 ms
ok 96 + join 4455 ms
ok 97 + aggregates 4610 ms
ok 98 + transactions 544 ms
ok 99 + random 379 ms
ok 100 + portals 426 ms
ok 101 + arrays 1056 ms
ok 102 + btree_index 5915 ms
ok 103 + hash_index 1978 ms
ok 104 + update 2085 ms
ok 105 + delete 165 ms
ok 106 + namespace 241 ms
ok 107 + prepared_xacts 399 ms
# parallel group (20 tests): init_privs drop_operator security_label password tablesample object_address lock replica_identity collate matview identity groupingsets rowsecurity spgist generated gist gin brin join_hash privileges
ok 108 + brin 10055 ms
ok 109 + gin 4794 ms
ok 110 + gist 4567 ms
ok 111 + spgist 3704 ms
ok 112 + privileges 10964 ms
ok 113 + init_privs 74 ms
ok 114 + security_label 161 ms
ok 115 + collate 1066 ms
ok 116 + matview 1588 ms
ok 117 + lock 503 ms
ok 118 + replica_identity 1057 ms
ok 119 + rowsecurity 3668 ms
ok 120 + object_address 501 ms
ok 121 + tablesample 467 ms
ok 122 + groupingsets 3158 ms
ok 123 + drop_operator 139 ms
ok 124 + password 345 ms
ok 125 + identity 2292 ms
ok 126 + generated 3850 ms
ok 127 + join_hash 10100 ms
# parallel group (2 tests): brin_bloom brin_multi
ok 128 + brin_bloom 140 ms
ok 129 + brin_multi 966 ms
# parallel group (18 tests): async dbsize collate.utf8 tid alter_operator tsrf tidscan create_role sysviews tidrangescan misc_functions alter_generic incremental_sort create_table_like merge misc without_overlaps collate.icu.utf8
ok 130 + create_table_like 854 ms
ok 131 + alter_generic 404 ms
ok 132 + alter_operator 198 ms
ok 133 + misc 1271 ms
ok 134 + async 55 ms
ok 135 + dbsize 56 ms
ok 136 + merge 1196 ms
ok 137 + misc_functions 397 ms
ok 138 + sysviews 344 ms
ok 139 + tsrf 220 ms
ok 140 + tid 162 ms
ok 141 + tidscan 235 ms
ok 142 + tidrangescan 382 ms
ok 143 + collate.utf8 110 ms
ok 144 + collate.icu.utf8 1436 ms
ok 145 + incremental_sort 841 ms
ok 146 + create_role 280 ms
ok 147 + without_overlaps 1424 ms
# parallel group (7 tests): collate.linux.utf8 collate.windows.win1252 amutils psql_crosstab psql rules stats_ext
ok 148 + rules 1248 ms
ok 149 + psql 1099 ms
ok 150 + psql_crosstab 108 ms
ok 151 + amutils 39 ms
ok 152 + stats_ext 2660 ms
ok 153 + collate.linux.utf8 26 ms
ok 154 + collate.windows.win1252 36 ms
ok 155 - select_parallel 6631 ms
ok 156 - write_parallel 238 ms
ok 157 - vacuum_parallel 168 ms
# parallel group (2 tests): subscription publication
ok 158 + publication 1747 ms
ok 159 + subscription 167 ms
# parallel group (17 tests): portals_p2 combocid advisory_lock tsdicts xmlmap functional_deps equivclass guc dependency select_views window bitmapops cluster indirect_toast foreign_data tsearch foreign_key
ok 160 + select_views 537 ms
ok 161 + portals_p2 102 ms
ok 162 + foreign_key 2637 ms
ok 163 + cluster 1104 ms
ok 164 + dependency 312 ms
ok 165 + guc 295 ms
ok 166 + bitmapops 1052 ms
ok 167 + combocid 151 ms
ok 168 + tsearch 2383 ms
ok 169 + tsdicts 186 ms
ok 170 + foreign_data 1604 ms
ok 171 + window 973 ms
ok 172 + xmlmap 203 ms
ok 173 + functional_deps 242 ms
ok 174 + advisory_lock 154 ms
ok 175 + indirect_toast 1155 ms
ok 176 + equivclass 272 ms
# parallel group (9 tests): jsonpath_encoding json_encoding jsonpath sqljson_jsontable sqljson json jsonb_jsonpath sqljson_queryfuncs jsonb
ok 177 + json 323 ms
ok 178 + jsonb 780 ms
ok 179 + json_encoding 48 ms
ok 180 + jsonpath 122 ms
ok 181 + jsonpath_encoding 46 ms
ok 182 + jsonb_jsonpath 334 ms
ok 183 + sqljson 170 ms
ok 184 + sqljson_queryfuncs 372 ms
ok 185 + sqljson_jsontable 134 ms
# parallel group (18 tests): prepare plancache returning conversion limit sequence copy2 temp polymorphism truncate rowtypes largeobject with domain rangefuncs xml alter_table plpgsql
ok 186 + plancache 362 ms
ok 187 + limit 485 ms
ok 188 + plpgsql 3770 ms
ok 189 + copy2 674 ms
ok 190 + temp 768 ms
ok 191 + domain 1316 ms
ok 192 + rangefuncs 1491 ms
ok 193 + prepare 183 ms
ok 194 + conversion 479 ms
ok 195 + truncate 986 ms
ok 196 + alter_table 3704 ms
ok 197 + sequence 665 ms
ok 198 + polymorphism 957 ms
ok 199 + rowtypes 1003 ms
ok 200 + returning 369 ms
ok 201 + largeobject 1044 ms
ok 202 + with 1235 ms
ok 203 + xml 1829 ms
# parallel group (15 tests): hash_part reloptions explain partition_info predicate compression partition_merge memoize partition_split stats partition_aggregate tuplesort partition_join indexing partition_prune
ok 204 + partition_merge 1101 ms
ok 205 + partition_split 1550 ms
ok 206 + partition_join 2681 ms
ok 207 + partition_prune 3048 ms
ok 208 + reloptions 169 ms
ok 209 + hash_part 141 ms
ok 210 + indexing 2818 ms
ok 211 + partition_aggregate 2105 ms
ok 212 + partition_info 292 ms
ok 213 + tuplesort 2616 ms
ok 214 + explain 273 ms
ok 215 + compression 907 ms
ok 216 + memoize 1230 ms
ok 217 + stats 1816 ms
ok 218 + predicate 304 ms
# parallel group (2 tests): oidjoins event_trigger
ok 219 + oidjoins 571 ms
not ok 220 + event_trigger 581 ms
# (test process exited with exit code 2)
not ok 221 - event_trigger_login 6 ms
# (test process exited with exit code 2)
not ok 222 - fast_default 5 ms
# (test process exited with exit code 2)
not ok 223 - tablespace 5 ms
# (test process exited with exit code 2)
1..223
# 4 of 223 tests failed.
# The differences that caused some tests to fail can be viewed in the file "/tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/regression.diffs".
# A copy of the test summary that you see above is saved in the file "/tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/regression.out".
=== dumping /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/regression.diffs ===
diff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/event_trigger.out /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/results/event_trigger.out
--- /tmp/cirrus-ci-build/src/test/regress/expected/event_trigger.out 2024-04-07 19:35:44.251735807 +0000
+++ /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/results/event_trigger.out 2024-04-07 19:40:11.251571400 +0000
@@ -626,119 +626,10 @@
 NOTICE: REINDEX END: command_tag=REINDEX type=index identity=concur_reindex_schema.ind
 REINDEX SCHEMA CONCURRENTLY concur_reindex_schema;
 NOTICE: REINDEX END: command_tag=REINDEX type=index identity=concur_reindex_schema.ind
--- One table on schema but no indexes
-DROP INDEX concur_reindex_schema.ind;
-REINDEX SCHEMA concur_reindex_schema;
-REINDEX SCHEMA CONCURRENTLY concur_reindex_schema;
-DROP SCHEMA concur_reindex_schema CASCADE;
-NOTICE: drop cascades to table concur_reindex_schema.tab
--- With a partitioned table, and nothing else.
-CREATE TABLE concur_reindex_part (id int) PARTITION BY RANGE (id);
-REINDEX TABLE concur_reindex_part;
-REINDEX TABLE CONCURRENTLY concur_reindex_part;
--- Partition that would be reindexed, still nothing.
-CREATE TABLE concur_reindex_child PARTITION OF concur_reindex_part - FOR VALUES FROM (0) TO (10); -REINDEX TABLE concur_reindex_part; -REINDEX TABLE CONCURRENTLY concur_reindex_part; --- Now add some indexes. -CREATE INDEX concur_reindex_partidx ON concur_reindex_part (id); -REINDEX INDEX concur_reindex_partidx; -NOTICE: REINDEX END: command_tag=REINDEX type=index identity=public.concur_reindex_child_id_idx -REINDEX INDEX CONCURRENTLY concur_reindex_partidx; -NOTICE: REINDEX END: command_tag=REINDEX type=index identity=public.concur_reindex_child_id_idx -REINDEX TABLE concur_reindex_part; -NOTICE: REINDEX END: command_tag=REINDEX type=index identity=public.concur_reindex_child_id_idx -REINDEX TABLE CONCURRENTLY concur_reindex_part; -NOTICE: REINDEX END: command_tag=REINDEX type=index identity=public.concur_reindex_child_id_idx -DROP TABLE concur_reindex_part; --- Clean up -DROP EVENT TRIGGER regress_reindex_start; -DROP EVENT TRIGGER regress_reindex_end; -DROP EVENT TRIGGER regress_reindex_end_snap; -DROP FUNCTION reindex_end_command(); -DROP FUNCTION reindex_end_command_snap(); -DROP FUNCTION reindex_start_command(); -DROP TABLE concur_reindex_tab; --- test Row Security Event Trigger -RESET SESSION AUTHORIZATION; -CREATE TABLE event_trigger_test (a integer, b text); -CREATE OR REPLACE FUNCTION start_command() -RETURNS event_trigger AS $$ -BEGIN -RAISE NOTICE '% - ddl_command_start', tg_tag; -END; -$$ LANGUAGE plpgsql; -CREATE OR REPLACE FUNCTION end_command() -RETURNS event_trigger AS $$ -BEGIN -RAISE NOTICE '% - ddl_command_end', tg_tag; -END; -$$ LANGUAGE plpgsql; -CREATE OR REPLACE FUNCTION drop_sql_command() -RETURNS event_trigger AS $$ -BEGIN -RAISE NOTICE '% - sql_drop', tg_tag; -END; -$$ LANGUAGE plpgsql; -CREATE EVENT TRIGGER start_rls_command ON ddl_command_start - WHEN TAG IN ('CREATE POLICY', 'ALTER POLICY', 'DROP POLICY') EXECUTE PROCEDURE start_command(); -CREATE EVENT TRIGGER end_rls_command ON ddl_command_end - WHEN TAG IN ('CREATE POLICY', 'ALTER POLICY', 'DROP POLICY') EXECUTE PROCEDURE end_command(); -CREATE EVENT TRIGGER sql_drop_command ON sql_drop - WHEN TAG IN ('DROP POLICY') EXECUTE PROCEDURE drop_sql_command(); -CREATE POLICY p1 ON event_trigger_test USING (FALSE); -NOTICE: CREATE POLICY - ddl_command_start -NOTICE: CREATE POLICY - ddl_command_end -ALTER POLICY p1 ON event_trigger_test USING (TRUE); -NOTICE: ALTER POLICY - ddl_command_start -NOTICE: ALTER POLICY - ddl_command_end -ALTER POLICY p1 ON event_trigger_test RENAME TO p2; -NOTICE: ALTER POLICY - ddl_command_start -NOTICE: ALTER POLICY - ddl_command_end -DROP POLICY p2 ON event_trigger_test; -NOTICE: DROP POLICY - ddl_command_start -NOTICE: DROP POLICY - sql_drop -NOTICE: DROP POLICY - ddl_command_end --- Check the object addresses of all the event triggers. 
-SELECT - e.evtname, - pg_describe_object('pg_event_trigger'::regclass, e.oid, 0) as descr, - b.type, b.object_names, b.object_args, - pg_identify_object(a.classid, a.objid, a.objsubid) as ident - FROM pg_event_trigger as e, - LATERAL pg_identify_object_as_address('pg_event_trigger'::regclass, e.oid, 0) as b, - LATERAL pg_get_object_address(b.type, b.object_names, b.object_args) as a - ORDER BY e.evtname; - evtname | descr | type | object_names | object_args | ident --------------------+---------------------------------+---------------+---------------------+-------------+-------------------------------------------------------- - end_rls_command | event trigger end_rls_command | event trigger | {end_rls_command} | {} | ("event trigger",,end_rls_command,end_rls_command) - sql_drop_command | event trigger sql_drop_command | event trigger | {sql_drop_command} | {} | ("event trigger",,sql_drop_command,sql_drop_command) - start_rls_command | event trigger start_rls_command | event trigger | {start_rls_command} | {} | ("event trigger",,start_rls_command,start_rls_command) -(3 rows) - -DROP EVENT TRIGGER start_rls_command; -DROP EVENT TRIGGER end_rls_command; -DROP EVENT TRIGGER sql_drop_command; --- Check the GUC for disabling event triggers -CREATE FUNCTION test_event_trigger_guc() RETURNS event_trigger -LANGUAGE plpgsql AS $$ -DECLARE - obj record; -BEGIN - FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects() - LOOP - RAISE NOTICE '% dropped %', tg_tag, obj.object_type; - END LOOP; -END; -$$; -CREATE EVENT TRIGGER test_event_trigger_guc - ON sql_drop - WHEN TAG IN ('DROP POLICY') EXECUTE FUNCTION test_event_trigger_guc(); -SET event_triggers = 'on'; -CREATE POLICY pguc ON event_trigger_test USING (FALSE); -DROP POLICY pguc ON event_trigger_test; -NOTICE: DROP POLICY dropped policy -CREATE POLICY pguc ON event_trigger_test USING (FALSE); -SET event_triggers = 'off'; -DROP POLICY pguc ON event_trigger_test; +WARNING: terminating connection because of crash of another server process +DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory. +HINT: In a moment you should be able to reconnect to the database and repeat your command. +server closed the connection unexpectedly + This probably means the server terminated abnormally + before or while processing the request. +connection to server was lost diff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/event_trigger_login.out /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/results/event_trigger_login.out --- /tmp/cirrus-ci-build/src/test/regress/expected/event_trigger_login.out 2024-04-07 19:35:44.251735807 +0000 +++ /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/results/event_trigger_login.out 2024-04-07 19:40:11.267571381 +0000 @@ -1,39 +1,2 @@ --- Login event triggers -CREATE TABLE user_logins(id serial, who text); -GRANT SELECT ON user_logins TO public; -CREATE FUNCTION on_login_proc() RETURNS event_trigger AS $$ -BEGIN - INSERT INTO user_logins (who) VALUES (SESSION_USER); - RAISE NOTICE 'You are welcome!'; -END; -$$ LANGUAGE plpgsql; -CREATE EVENT TRIGGER on_login_trigger ON login EXECUTE PROCEDURE on_login_proc(); -ALTER EVENT TRIGGER on_login_trigger ENABLE ALWAYS; -\c -NOTICE: You are welcome! -SELECT COUNT(*) FROM user_logins; - count -------- - 1 -(1 row) - -\c -NOTICE: You are welcome! 
-SELECT COUNT(*) FROM user_logins; - count -------- - 2 -(1 row) - --- Check dathasloginevt in system catalog -SELECT dathasloginevt FROM pg_database WHERE datname= :'DBNAME'; - dathasloginevt ----------------- - t -(1 row) - --- Cleanup -DROP TABLE user_logins; -DROP EVENT TRIGGER on_login_trigger; -DROP FUNCTION on_login_proc(); -\c +psql: error: connection to server on socket "/tmp/GXpacC8XDJ/.s.PGSQL.52988" failed: No such file or directory + Is the server running locally and accepting connections on that socket? diff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/fast_default.out /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/results/fast_default.out --- /tmp/cirrus-ci-build/src/test/regress/expected/fast_default.out 2024-04-07 19:35:44.251735807 +0000 +++ /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/results/fast_default.out 2024-04-07 19:40:11.275571371 +0000 @@ -1,861 +1,2 @@ --- --- ALTER TABLE ADD COLUMN DEFAULT test --- -SET search_path = fast_default; -CREATE SCHEMA fast_default; -CREATE TABLE m(id OID); -INSERT INTO m VALUES (NULL::OID); -CREATE FUNCTION set(tabname name) RETURNS VOID -AS $$ -BEGIN - UPDATE m - SET id = (SELECT c.relfilenode - FROM pg_class AS c, pg_namespace AS s - WHERE c.relname = tabname - AND c.relnamespace = s.oid - AND s.nspname = 'fast_default'); -END; -$$ LANGUAGE 'plpgsql'; -CREATE FUNCTION comp() RETURNS TEXT -AS $$ -BEGIN - RETURN (SELECT CASE - WHEN m.id = c.relfilenode THEN 'Unchanged' - ELSE 'Rewritten' - END - FROM m, pg_class AS c, pg_namespace AS s - WHERE c.relname = 't' - AND c.relnamespace = s.oid - AND s.nspname = 'fast_default'); -END; -$$ LANGUAGE 'plpgsql'; -CREATE FUNCTION log_rewrite() RETURNS event_trigger -LANGUAGE plpgsql as -$func$ - -declare - this_schema text; -begin - select into this_schema relnamespace::regnamespace::text - from pg_class - where oid = pg_event_trigger_table_rewrite_oid(); - if this_schema = 'fast_default' - then - RAISE NOTICE 'rewriting table % for reason %', - pg_event_trigger_table_rewrite_oid()::regclass, - pg_event_trigger_table_rewrite_reason(); - end if; -end; -$func$; -CREATE TABLE has_volatile AS -SELECT * FROM generate_series(1,10) id; -CREATE EVENT TRIGGER has_volatile_rewrite - ON table_rewrite - EXECUTE PROCEDURE log_rewrite(); --- only the last of these should trigger a rewrite -ALTER TABLE has_volatile ADD col1 int; -ALTER TABLE has_volatile ADD col2 int DEFAULT 1; -ALTER TABLE has_volatile ADD col3 timestamptz DEFAULT current_timestamp; -ALTER TABLE has_volatile ADD col4 int DEFAULT (random() * 10000)::int; -NOTICE: rewriting table has_volatile for reason 2 --- Test a large sample of different datatypes -CREATE TABLE T(pk INT NOT NULL PRIMARY KEY, c_int INT DEFAULT 1); -SELECT set('t'); - set ------ - -(1 row) - -INSERT INTO T VALUES (1), (2); -ALTER TABLE T ADD COLUMN c_bpchar BPCHAR(5) DEFAULT 'hello', - ALTER COLUMN c_int SET DEFAULT 2; -INSERT INTO T VALUES (3), (4); -ALTER TABLE T ADD COLUMN c_text TEXT DEFAULT 'world', - ALTER COLUMN c_bpchar SET DEFAULT 'dog'; -INSERT INTO T VALUES (5), (6); -ALTER TABLE T ADD COLUMN c_date DATE DEFAULT '2016-06-02', - ALTER COLUMN c_text SET DEFAULT 'cat'; -INSERT INTO T VALUES (7), (8); -ALTER TABLE T ADD COLUMN c_timestamp TIMESTAMP DEFAULT '2016-09-01 12:00:00', - ADD COLUMN c_timestamp_null TIMESTAMP, - ALTER COLUMN c_date SET DEFAULT '2010-01-01'; -INSERT INTO T VALUES (9), (10); -ALTER TABLE T ADD COLUMN c_array TEXT[] - DEFAULT '{"This", "is", "the", "real", "world"}', - ALTER COLUMN c_timestamp 
SET DEFAULT '1970-12-31 11:12:13', - ALTER COLUMN c_timestamp_null SET DEFAULT '2016-09-29 12:00:00'; -INSERT INTO T VALUES (11), (12); -ALTER TABLE T ADD COLUMN c_small SMALLINT DEFAULT -5, - ADD COLUMN c_small_null SMALLINT, - ALTER COLUMN c_array - SET DEFAULT '{"This", "is", "no", "fantasy"}'; -INSERT INTO T VALUES (13), (14); -ALTER TABLE T ADD COLUMN c_big BIGINT DEFAULT 180000000000018, - ALTER COLUMN c_small SET DEFAULT 9, - ALTER COLUMN c_small_null SET DEFAULT 13; -INSERT INTO T VALUES (15), (16); -ALTER TABLE T ADD COLUMN c_num NUMERIC DEFAULT 1.00000000001, - ALTER COLUMN c_big SET DEFAULT -9999999999999999; -INSERT INTO T VALUES (17), (18); -ALTER TABLE T ADD COLUMN c_time TIME DEFAULT '12:00:00', - ALTER COLUMN c_num SET DEFAULT 2.000000000000002; -INSERT INTO T VALUES (19), (20); -ALTER TABLE T ADD COLUMN c_interval INTERVAL DEFAULT '1 day', - ALTER COLUMN c_time SET DEFAULT '23:59:59'; -INSERT INTO T VALUES (21), (22); -ALTER TABLE T ADD COLUMN c_hugetext TEXT DEFAULT repeat('abcdefg',1000), - ALTER COLUMN c_interval SET DEFAULT '3 hours'; -INSERT INTO T VALUES (23), (24); -ALTER TABLE T ALTER COLUMN c_interval DROP DEFAULT, - ALTER COLUMN c_hugetext SET DEFAULT repeat('poiuyt', 1000); -INSERT INTO T VALUES (25), (26); -ALTER TABLE T ALTER COLUMN c_bpchar DROP DEFAULT, - ALTER COLUMN c_date DROP DEFAULT, - ALTER COLUMN c_text DROP DEFAULT, - ALTER COLUMN c_timestamp DROP DEFAULT, - ALTER COLUMN c_array DROP DEFAULT, - ALTER COLUMN c_small DROP DEFAULT, - ALTER COLUMN c_big DROP DEFAULT, - ALTER COLUMN c_num DROP DEFAULT, - ALTER COLUMN c_time DROP DEFAULT, - ALTER COLUMN c_hugetext DROP DEFAULT; -INSERT INTO T VALUES (27), (28); -SELECT pk, c_int, c_bpchar, c_text, c_date, c_timestamp, - c_timestamp_null, c_array, c_small, c_small_null, - c_big, c_num, c_time, c_interval, - c_hugetext = repeat('abcdefg',1000) as c_hugetext_origdef, - c_hugetext = repeat('poiuyt', 1000) as c_hugetext_newdef -FROM T ORDER BY pk; - pk | c_int | c_bpchar | c_text | c_date | c_timestamp | c_timestamp_null | c_array | c_small | c_small_null | c_big | c_num | c_time | c_interval | c_hugetext_origdef | c_hugetext_newdef -----+-------+----------+--------+------------+--------------------------+--------------------------+--------------------------+---------+--------------+-------------------+-------------------+----------+------------+--------------------+------------------- - 1 | 1 | hello | world | 06-02-2016 | Thu Sep 01 12:00:00 2016 | | {This,is,the,real,world} | -5 | | 180000000000018 | 1.00000000001 | 12:00:00 | @ 1 day | t | f - 2 | 1 | hello | world | 06-02-2016 | Thu Sep 01 12:00:00 2016 | | {This,is,the,real,world} | -5 | | 180000000000018 | 1.00000000001 | 12:00:00 | @ 1 day | t | f - 3 | 2 | hello | world | 06-02-2016 | Thu Sep 01 12:00:00 2016 | | {This,is,the,real,world} | -5 | | 180000000000018 | 1.00000000001 | 12:00:00 | @ 1 day | t | f - 4 | 2 | hello | world | 06-02-2016 | Thu Sep 01 12:00:00 2016 | | {This,is,the,real,world} | -5 | | 180000000000018 | 1.00000000001 | 12:00:00 | @ 1 day | t | f - 5 | 2 | dog | world | 06-02-2016 | Thu Sep 01 12:00:00 2016 | | {This,is,the,real,world} | -5 | | 180000000000018 | 1.00000000001 | 12:00:00 | @ 1 day | t | f - 6 | 2 | dog | world | 06-02-2016 | Thu Sep 01 12:00:00 2016 | | {This,is,the,real,world} | -5 | | 180000000000018 | 1.00000000001 | 12:00:00 | @ 1 day | t | f - 7 | 2 | dog | cat | 06-02-2016 | Thu Sep 01 12:00:00 2016 | | {This,is,the,real,world} | -5 | | 180000000000018 | 1.00000000001 | 12:00:00 | @ 1 day | t | f - 8 | 2 | dog 
| cat | 06-02-2016 | Thu Sep 01 12:00:00 2016 | | {This,is,the,real,world} | -5 | | 180000000000018 | 1.00000000001 | 12:00:00 | @ 1 day | t | f - 9 | 2 | dog | cat | 01-01-2010 | Thu Sep 01 12:00:00 2016 | | {This,is,the,real,world} | -5 | | 180000000000018 | 1.00000000001 | 12:00:00 | @ 1 day | t | f - 10 | 2 | dog | cat | 01-01-2010 | Thu Sep 01 12:00:00 2016 | | {This,is,the,real,world} | -5 | | 180000000000018 | 1.00000000001 | 12:00:00 | @ 1 day | t | f - 11 | 2 | dog | cat | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,the,real,world} | -5 | | 180000000000018 | 1.00000000001 | 12:00:00 | @ 1 day | t | f - 12 | 2 | dog | cat | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,the,real,world} | -5 | | 180000000000018 | 1.00000000001 | 12:00:00 | @ 1 day | t | f - 13 | 2 | dog | cat | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy} | -5 | | 180000000000018 | 1.00000000001 | 12:00:00 | @ 1 day | t | f - 14 | 2 | dog | cat | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy} | -5 | | 180000000000018 | 1.00000000001 | 12:00:00 | @ 1 day | t | f - 15 | 2 | dog | cat | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy} | 9 | 13 | 180000000000018 | 1.00000000001 | 12:00:00 | @ 1 day | t | f - 16 | 2 | dog | cat | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy} | 9 | 13 | 180000000000018 | 1.00000000001 | 12:00:00 | @ 1 day | t | f - 17 | 2 | dog | cat | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy} | 9 | 13 | -9999999999999999 | 1.00000000001 | 12:00:00 | @ 1 day | t | f - 18 | 2 | dog | cat | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy} | 9 | 13 | -9999999999999999 | 1.00000000001 | 12:00:00 | @ 1 day | t | f - 19 | 2 | dog | cat | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy} | 9 | 13 | -9999999999999999 | 2.000000000000002 | 12:00:00 | @ 1 day | t | f - 20 | 2 | dog | cat | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy} | 9 | 13 | -9999999999999999 | 2.000000000000002 | 12:00:00 | @ 1 day | t | f - 21 | 2 | dog | cat | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy} | 9 | 13 | -9999999999999999 | 2.000000000000002 | 23:59:59 | @ 1 day | t | f - 22 | 2 | dog | cat | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy} | 9 | 13 | -9999999999999999 | 2.000000000000002 | 23:59:59 | @ 1 day | t | f - 23 | 2 | dog | cat | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy} | 9 | 13 | -9999999999999999 | 2.000000000000002 | 23:59:59 | @ 3 hours | t | f - 24 | 2 | dog | cat | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy} | 9 | 13 | -9999999999999999 | 2.000000000000002 | 23:59:59 | @ 3 hours | t | f - 25 | 2 | dog | cat | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy} | 9 | 13 | -9999999999999999 | 2.000000000000002 | 23:59:59 | | f | t - 26 | 2 | dog | cat | 01-01-2010 | Thu Dec 31 11:12:13 1970 | Thu Sep 29 12:00:00 2016 | {This,is,no,fantasy} | 9 | 13 | -9999999999999999 | 2.000000000000002 | 23:59:59 | | f | t - 27 | 2 | | | | | Thu Sep 29 12:00:00 2016 | | | 13 | | | | | | - 28 | 2 | | | | | Thu Sep 29 12:00:00 2016 | | | 13 | | 
| | | | -(28 rows) - -SELECT comp(); - comp ------------ - Unchanged -(1 row) - -DROP TABLE T; --- Test expressions in the defaults -CREATE OR REPLACE FUNCTION foo(a INT) RETURNS TEXT AS $$ -DECLARE res TEXT := ''; - i INT; -BEGIN - i := 0; - WHILE (i < a) LOOP - res := res || chr(ascii('a') + i); - i := i + 1; - END LOOP; - RETURN res; -END; $$ LANGUAGE PLPGSQL STABLE; -CREATE TABLE T(pk INT NOT NULL PRIMARY KEY, c_int INT DEFAULT LENGTH(foo(6))); -SELECT set('t'); - set ------ - -(1 row) - -INSERT INTO T VALUES (1), (2); -ALTER TABLE T ADD COLUMN c_bpchar BPCHAR(5) DEFAULT foo(4), - ALTER COLUMN c_int SET DEFAULT LENGTH(foo(8)); -INSERT INTO T VALUES (3), (4); -ALTER TABLE T ADD COLUMN c_text TEXT DEFAULT foo(6), - ALTER COLUMN c_bpchar SET DEFAULT foo(3); -INSERT INTO T VALUES (5), (6); -ALTER TABLE T ADD COLUMN c_date DATE - DEFAULT '2016-06-02'::DATE + LENGTH(foo(10)), - ALTER COLUMN c_text SET DEFAULT foo(12); -INSERT INTO T VALUES (7), (8); -ALTER TABLE T ADD COLUMN c_timestamp TIMESTAMP - DEFAULT '2016-09-01'::DATE + LENGTH(foo(10)), - ALTER COLUMN c_date - SET DEFAULT '2010-01-01'::DATE - LENGTH(foo(4)); -INSERT INTO T VALUES (9), (10); -ALTER TABLE T ADD COLUMN c_array TEXT[] - DEFAULT ('{"This", "is", "' || foo(4) || - '","the", "real", "world"}')::TEXT[], - ALTER COLUMN c_timestamp - SET DEFAULT '1970-12-31'::DATE + LENGTH(foo(30)); -INSERT INTO T VALUES (11), (12); -ALTER TABLE T ALTER COLUMN c_int DROP DEFAULT, - ALTER COLUMN c_array - SET DEFAULT ('{"This", "is", "' || foo(1) || - '", "fantasy"}')::text[]; -INSERT INTO T VALUES (13), (14); -ALTER TABLE T ALTER COLUMN c_bpchar DROP DEFAULT, - ALTER COLUMN c_date DROP DEFAULT, - ALTER COLUMN c_text DROP DEFAULT, - ALTER COLUMN c_timestamp DROP DEFAULT, - ALTER COLUMN c_array DROP DEFAULT; -INSERT INTO T VALUES (15), (16); -SELECT * FROM T; - pk | c_int | c_bpchar | c_text | c_date | c_timestamp | c_array -----+-------+----------+--------------+------------+--------------------------+------------------------------- - 1 | 6 | abcd | abcdef | 06-12-2016 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world} - 2 | 6 | abcd | abcdef | 06-12-2016 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world} - 3 | 8 | abcd | abcdef | 06-12-2016 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world} - 4 | 8 | abcd | abcdef | 06-12-2016 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world} - 5 | 8 | abc | abcdef | 06-12-2016 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world} - 6 | 8 | abc | abcdef | 06-12-2016 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world} - 7 | 8 | abc | abcdefghijkl | 06-12-2016 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world} - 8 | 8 | abc | abcdefghijkl | 06-12-2016 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world} - 9 | 8 | abc | abcdefghijkl | 12-28-2009 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world} - 10 | 8 | abc | abcdefghijkl | 12-28-2009 | Sun Sep 11 00:00:00 2016 | {This,is,abcd,the,real,world} - 11 | 8 | abc | abcdefghijkl | 12-28-2009 | Sat Jan 30 00:00:00 1971 | {This,is,abcd,the,real,world} - 12 | 8 | abc | abcdefghijkl | 12-28-2009 | Sat Jan 30 00:00:00 1971 | {This,is,abcd,the,real,world} - 13 | | abc | abcdefghijkl | 12-28-2009 | Sat Jan 30 00:00:00 1971 | {This,is,a,fantasy} - 14 | | abc | abcdefghijkl | 12-28-2009 | Sat Jan 30 00:00:00 1971 | {This,is,a,fantasy} - 15 | | | | | | - 16 | | | | | | -(16 rows) - -SELECT comp(); - comp ------------ - Unchanged -(1 row) - -DROP TABLE T; -DROP FUNCTION foo(INT); --- Fall back to full rewrite for 
volatile expressions -CREATE TABLE T(pk INT NOT NULL PRIMARY KEY); -INSERT INTO T VALUES (1); -SELECT set('t'); - set ------ - -(1 row) - --- now() is stable, because it returns the transaction timestamp -ALTER TABLE T ADD COLUMN c1 TIMESTAMP DEFAULT now(); -SELECT comp(); - comp ------------ - Unchanged -(1 row) - --- clock_timestamp() is volatile -ALTER TABLE T ADD COLUMN c2 TIMESTAMP DEFAULT clock_timestamp(); -NOTICE: rewriting table t for reason 2 -SELECT comp(); - comp ------------ - Rewritten -(1 row) - --- check that we notice insertion of a volatile default argument -CREATE FUNCTION foolme(timestamptz DEFAULT clock_timestamp()) - RETURNS timestamptz - IMMUTABLE AS 'select $1' LANGUAGE sql; -ALTER TABLE T ADD COLUMN c3 timestamptz DEFAULT foolme(); -NOTICE: rewriting table t for reason 2 -SELECT attname, atthasmissing, attmissingval FROM pg_attribute - WHERE attrelid = 't'::regclass AND attnum > 0 - ORDER BY attnum; - attname | atthasmissing | attmissingval ----------+---------------+--------------- - pk | f | - c1 | f | - c2 | f | - c3 | f | -(4 rows) - -DROP TABLE T; -DROP FUNCTION foolme(timestamptz); --- Simple querie -CREATE TABLE T (pk INT NOT NULL PRIMARY KEY); -SELECT set('t'); - set ------ - -(1 row) - -INSERT INTO T SELECT * FROM generate_series(1, 10) a; -ALTER TABLE T ADD COLUMN c_bigint BIGINT NOT NULL DEFAULT -1; -INSERT INTO T SELECT b, b - 10 FROM generate_series(11, 20) a(b); -ALTER TABLE T ADD COLUMN c_text TEXT DEFAULT 'hello'; -INSERT INTO T SELECT b, b - 10, (b + 10)::text FROM generate_series(21, 30) a(b); --- WHERE clause -SELECT c_bigint, c_text FROM T WHERE c_bigint = -1 LIMIT 1; - c_bigint | c_text -----------+-------- - -1 | hello -(1 row) - -EXPLAIN (VERBOSE TRUE, COSTS FALSE) -SELECT c_bigint, c_text FROM T WHERE c_bigint = -1 LIMIT 1; - QUERY PLAN ----------------------------------------------- - Limit - Output: c_bigint, c_text - -> Seq Scan on fast_default.t - Output: c_bigint, c_text - Filter: (t.c_bigint = '-1'::integer) -(5 rows) - -SELECT c_bigint, c_text FROM T WHERE c_text = 'hello' LIMIT 1; - c_bigint | c_text -----------+-------- - -1 | hello -(1 row) - -EXPLAIN (VERBOSE TRUE, COSTS FALSE) SELECT c_bigint, c_text FROM T WHERE c_text = 'hello' LIMIT 1; - QUERY PLAN --------------------------------------------- - Limit - Output: c_bigint, c_text - -> Seq Scan on fast_default.t - Output: c_bigint, c_text - Filter: (t.c_text = 'hello'::text) -(5 rows) - --- COALESCE -SELECT COALESCE(c_bigint, pk), COALESCE(c_text, pk::text) -FROM T -ORDER BY pk LIMIT 10; - coalesce | coalesce -----------+---------- - -1 | hello - -1 | hello - -1 | hello - -1 | hello - -1 | hello - -1 | hello - -1 | hello - -1 | hello - -1 | hello - -1 | hello -(10 rows) - --- Aggregate function -SELECT SUM(c_bigint), MAX(c_text COLLATE "C" ), MIN(c_text COLLATE "C") FROM T; - sum | max | min ------+-------+----- - 200 | hello | 31 -(1 row) - --- ORDER BY -SELECT * FROM T ORDER BY c_bigint, c_text, pk LIMIT 10; - pk | c_bigint | c_text -----+----------+-------- - 1 | -1 | hello - 2 | -1 | hello - 3 | -1 | hello - 4 | -1 | hello - 5 | -1 | hello - 6 | -1 | hello - 7 | -1 | hello - 8 | -1 | hello - 9 | -1 | hello - 10 | -1 | hello -(10 rows) - -EXPLAIN (VERBOSE TRUE, COSTS FALSE) -SELECT * FROM T ORDER BY c_bigint, c_text, pk LIMIT 10; - QUERY PLAN ----------------------------------------------- - Limit - Output: pk, c_bigint, c_text - -> Sort - Output: pk, c_bigint, c_text - Sort Key: t.c_bigint, t.c_text, t.pk - -> Seq Scan on fast_default.t - Output: pk, c_bigint, c_text -(7 
rows) - --- LIMIT -SELECT * FROM T WHERE c_bigint > -1 ORDER BY c_bigint, c_text, pk LIMIT 10; - pk | c_bigint | c_text -----+----------+-------- - 11 | 1 | hello - 12 | 2 | hello - 13 | 3 | hello - 14 | 4 | hello - 15 | 5 | hello - 16 | 6 | hello - 17 | 7 | hello - 18 | 8 | hello - 19 | 9 | hello - 20 | 10 | hello -(10 rows) - -EXPLAIN (VERBOSE TRUE, COSTS FALSE) -SELECT * FROM T WHERE c_bigint > -1 ORDER BY c_bigint, c_text, pk LIMIT 10; - QUERY PLAN ----------------------------------------------------- - Limit - Output: pk, c_bigint, c_text - -> Sort - Output: pk, c_bigint, c_text - Sort Key: t.c_bigint, t.c_text, t.pk - -> Seq Scan on fast_default.t - Output: pk, c_bigint, c_text - Filter: (t.c_bigint > '-1'::integer) -(8 rows) - --- DELETE with RETURNING -DELETE FROM T WHERE pk BETWEEN 10 AND 20 RETURNING *; - pk | c_bigint | c_text -----+----------+-------- - 10 | -1 | hello - 11 | 1 | hello - 12 | 2 | hello - 13 | 3 | hello - 14 | 4 | hello - 15 | 5 | hello - 16 | 6 | hello - 17 | 7 | hello - 18 | 8 | hello - 19 | 9 | hello - 20 | 10 | hello -(11 rows) - -EXPLAIN (VERBOSE TRUE, COSTS FALSE) -DELETE FROM T WHERE pk BETWEEN 10 AND 20 RETURNING *; - QUERY PLAN ------------------------------------------------------------ - Delete on fast_default.t - Output: pk, c_bigint, c_text - -> Bitmap Heap Scan on fast_default.t - Output: ctid - Recheck Cond: ((t.pk >= 10) AND (t.pk <= 20)) - -> Bitmap Index Scan on t_pkey - Index Cond: ((t.pk >= 10) AND (t.pk <= 20)) -(7 rows) - --- UPDATE -UPDATE T SET c_text = '"' || c_text || '"' WHERE pk < 10; -SELECT * FROM T WHERE c_text LIKE '"%"' ORDER BY PK; - pk | c_bigint | c_text -----+----------+--------- - 1 | -1 | "hello" - 2 | -1 | "hello" - 3 | -1 | "hello" - 4 | -1 | "hello" - 5 | -1 | "hello" - 6 | -1 | "hello" - 7 | -1 | "hello" - 8 | -1 | "hello" - 9 | -1 | "hello" -(9 rows) - -SELECT comp(); - comp ------------ - Unchanged -(1 row) - -DROP TABLE T; --- Combine with other DDL -CREATE TABLE T(pk INT NOT NULL PRIMARY KEY); -SELECT set('t'); - set ------ - -(1 row) - -INSERT INTO T VALUES (1), (2); -ALTER TABLE T ADD COLUMN c_int INT NOT NULL DEFAULT -1; -INSERT INTO T VALUES (3), (4); -ALTER TABLE T ADD COLUMN c_text TEXT DEFAULT 'Hello'; -INSERT INTO T VALUES (5), (6); -ALTER TABLE T ALTER COLUMN c_text SET DEFAULT 'world', - ALTER COLUMN c_int SET DEFAULT 1; -INSERT INTO T VALUES (7), (8); -SELECT * FROM T ORDER BY pk; - pk | c_int | c_text -----+-------+-------- - 1 | -1 | Hello - 2 | -1 | Hello - 3 | -1 | Hello - 4 | -1 | Hello - 5 | -1 | Hello - 6 | -1 | Hello - 7 | 1 | world - 8 | 1 | world -(8 rows) - --- Add an index -CREATE INDEX i ON T(c_int, c_text); -SELECT c_text FROM T WHERE c_int = -1; - c_text --------- - Hello - Hello - Hello - Hello - Hello - Hello -(6 rows) - -SELECT comp(); - comp ------------ - Unchanged -(1 row) - --- query to exercise expand_tuple function -CREATE TABLE t1 AS -SELECT 1::int AS a , 2::int AS b -FROM generate_series(1,20) q; -ALTER TABLE t1 ADD COLUMN c text; -SELECT a, - stddev(cast((SELECT sum(1) FROM generate_series(1,20) x) AS float4)) - OVER (PARTITION BY a,b,c ORDER BY b) - AS z -FROM t1; - a | z ----+--- - 1 | 0 - 1 | 0 - 1 | 0 - 1 | 0 - 1 | 0 - 1 | 0 - 1 | 0 - 1 | 0 - 1 | 0 - 1 | 0 - 1 | 0 - 1 | 0 - 1 | 0 - 1 | 0 - 1 | 0 - 1 | 0 - 1 | 0 - 1 | 0 - 1 | 0 - 1 | 0 -(20 rows) - -DROP TABLE T; --- test that we account for missing columns without defaults correctly --- in expand_tuple, and that rows are correctly expanded for triggers -CREATE FUNCTION test_trigger() -RETURNS trigger -LANGUAGE plpgsql -AS $$ 
- -begin - raise notice 'old tuple: %', to_json(OLD)::text; - if TG_OP = 'DELETE' - then - return OLD; - else - return NEW; - end if; -end; - -$$; --- 2 new columns, both have defaults -CREATE TABLE t (id serial PRIMARY KEY, a int, b int, c int); -INSERT INTO t (a,b,c) VALUES (1,2,3); -ALTER TABLE t ADD COLUMN x int NOT NULL DEFAULT 4; -ALTER TABLE t ADD COLUMN y int NOT NULL DEFAULT 5; -CREATE TRIGGER a BEFORE UPDATE ON t FOR EACH ROW EXECUTE PROCEDURE test_trigger(); -SELECT * FROM t; - id | a | b | c | x | y -----+---+---+---+---+--- - 1 | 1 | 2 | 3 | 4 | 5 -(1 row) - -UPDATE t SET y = 2; -NOTICE: old tuple: {"id":1,"a":1,"b":2,"c":3,"x":4,"y":5} -SELECT * FROM t; - id | a | b | c | x | y -----+---+---+---+---+--- - 1 | 1 | 2 | 3 | 4 | 2 -(1 row) - -DROP TABLE t; --- 2 new columns, first has default -CREATE TABLE t (id serial PRIMARY KEY, a int, b int, c int); -INSERT INTO t (a,b,c) VALUES (1,2,3); -ALTER TABLE t ADD COLUMN x int NOT NULL DEFAULT 4; -ALTER TABLE t ADD COLUMN y int; -CREATE TRIGGER a BEFORE UPDATE ON t FOR EACH ROW EXECUTE PROCEDURE test_trigger(); -SELECT * FROM t; - id | a | b | c | x | y -----+---+---+---+---+--- - 1 | 1 | 2 | 3 | 4 | -(1 row) - -UPDATE t SET y = 2; -NOTICE: old tuple: {"id":1,"a":1,"b":2,"c":3,"x":4,"y":null} -SELECT * FROM t; - id | a | b | c | x | y -----+---+---+---+---+--- - 1 | 1 | 2 | 3 | 4 | 2 -(1 row) - -DROP TABLE t; --- 2 new columns, second has default -CREATE TABLE t (id serial PRIMARY KEY, a int, b int, c int); -INSERT INTO t (a,b,c) VALUES (1,2,3); -ALTER TABLE t ADD COLUMN x int; -ALTER TABLE t ADD COLUMN y int NOT NULL DEFAULT 5; -CREATE TRIGGER a BEFORE UPDATE ON t FOR EACH ROW EXECUTE PROCEDURE test_trigger(); -SELECT * FROM t; - id | a | b | c | x | y -----+---+---+---+---+--- - 1 | 1 | 2 | 3 | | 5 -(1 row) - -UPDATE t SET y = 2; -NOTICE: old tuple: {"id":1,"a":1,"b":2,"c":3,"x":null,"y":5} -SELECT * FROM t; - id | a | b | c | x | y -----+---+---+---+---+--- - 1 | 1 | 2 | 3 | | 2 -(1 row) - -DROP TABLE t; --- 2 new columns, neither has default -CREATE TABLE t (id serial PRIMARY KEY, a int, b int, c int); -INSERT INTO t (a,b,c) VALUES (1,2,3); -ALTER TABLE t ADD COLUMN x int; -ALTER TABLE t ADD COLUMN y int; -CREATE TRIGGER a BEFORE UPDATE ON t FOR EACH ROW EXECUTE PROCEDURE test_trigger(); -SELECT * FROM t; - id | a | b | c | x | y -----+---+---+---+---+--- - 1 | 1 | 2 | 3 | | -(1 row) - -UPDATE t SET y = 2; -NOTICE: old tuple: {"id":1,"a":1,"b":2,"c":3,"x":null,"y":null} -SELECT * FROM t; - id | a | b | c | x | y -----+---+---+---+---+--- - 1 | 1 | 2 | 3 | | 2 -(1 row) - -DROP TABLE t; --- same as last 4 tests but here the last original column has a NULL value --- 2 new columns, both have defaults -CREATE TABLE t (id serial PRIMARY KEY, a int, b int, c int); -INSERT INTO t (a,b,c) VALUES (1,2,NULL); -ALTER TABLE t ADD COLUMN x int NOT NULL DEFAULT 4; -ALTER TABLE t ADD COLUMN y int NOT NULL DEFAULT 5; -CREATE TRIGGER a BEFORE UPDATE ON t FOR EACH ROW EXECUTE PROCEDURE test_trigger(); -SELECT * FROM t; - id | a | b | c | x | y -----+---+---+---+---+--- - 1 | 1 | 2 | | 4 | 5 -(1 row) - -UPDATE t SET y = 2; -NOTICE: old tuple: {"id":1,"a":1,"b":2,"c":null,"x":4,"y":5} -SELECT * FROM t; - id | a | b | c | x | y -----+---+---+---+---+--- - 1 | 1 | 2 | | 4 | 2 -(1 row) - -DROP TABLE t; --- 2 new columns, first has default -CREATE TABLE t (id serial PRIMARY KEY, a int, b int, c int); -INSERT INTO t (a,b,c) VALUES (1,2,NULL); -ALTER TABLE t ADD COLUMN x int NOT NULL DEFAULT 4; -ALTER TABLE t ADD COLUMN y int; -CREATE TRIGGER a BEFORE 
UPDATE ON t FOR EACH ROW EXECUTE PROCEDURE test_trigger(); -SELECT * FROM t; - id | a | b | c | x | y -----+---+---+---+---+--- - 1 | 1 | 2 | | 4 | -(1 row) - -UPDATE t SET y = 2; -NOTICE: old tuple: {"id":1,"a":1,"b":2,"c":null,"x":4,"y":null} -SELECT * FROM t; - id | a | b | c | x | y -----+---+---+---+---+--- - 1 | 1 | 2 | | 4 | 2 -(1 row) - -DROP TABLE t; --- 2 new columns, second has default -CREATE TABLE t (id serial PRIMARY KEY, a int, b int, c int); -INSERT INTO t (a,b,c) VALUES (1,2,NULL); -ALTER TABLE t ADD COLUMN x int; -ALTER TABLE t ADD COLUMN y int NOT NULL DEFAULT 5; -CREATE TRIGGER a BEFORE UPDATE ON t FOR EACH ROW EXECUTE PROCEDURE test_trigger(); -SELECT * FROM t; - id | a | b | c | x | y -----+---+---+---+---+--- - 1 | 1 | 2 | | | 5 -(1 row) - -UPDATE t SET y = 2; -NOTICE: old tuple: {"id":1,"a":1,"b":2,"c":null,"x":null,"y":5} -SELECT * FROM t; - id | a | b | c | x | y -----+---+---+---+---+--- - 1 | 1 | 2 | | | 2 -(1 row) - -DROP TABLE t; --- 2 new columns, neither has default -CREATE TABLE t (id serial PRIMARY KEY, a int, b int, c int); -INSERT INTO t (a,b,c) VALUES (1,2,NULL); -ALTER TABLE t ADD COLUMN x int; -ALTER TABLE t ADD COLUMN y int; -CREATE TRIGGER a BEFORE UPDATE ON t FOR EACH ROW EXECUTE PROCEDURE test_trigger(); -SELECT * FROM t; - id | a | b | c | x | y -----+---+---+---+---+--- - 1 | 1 | 2 | | | -(1 row) - -UPDATE t SET y = 2; -NOTICE: old tuple: {"id":1,"a":1,"b":2,"c":null,"x":null,"y":null} -SELECT * FROM t; - id | a | b | c | x | y -----+---+---+---+---+--- - 1 | 1 | 2 | | | 2 -(1 row) - -DROP TABLE t; --- make sure expanded tuple has correct self pointer --- it will be required by the RI trigger doing the cascading delete -CREATE TABLE leader (a int PRIMARY KEY, b int); -CREATE TABLE follower (a int REFERENCES leader ON DELETE CASCADE, b int); -INSERT INTO leader VALUES (1, 1), (2, 2); -ALTER TABLE leader ADD c int; -ALTER TABLE leader DROP c; -DELETE FROM leader; --- check that ALTER TABLE ... 
ALTER TYPE does the right thing -CREATE TABLE vtype( a integer); -INSERT INTO vtype VALUES (1); -ALTER TABLE vtype ADD COLUMN b DOUBLE PRECISION DEFAULT 0.2; -ALTER TABLE vtype ADD COLUMN c BOOLEAN DEFAULT true; -SELECT * FROM vtype; - a | b | c ----+-----+--- - 1 | 0.2 | t -(1 row) - -ALTER TABLE vtype - ALTER b TYPE text USING b::text, - ALTER c TYPE text USING c::text; -NOTICE: rewriting table vtype for reason 4 -SELECT * FROM vtype; - a | b | c ----+-----+------ - 1 | 0.2 | true -(1 row) - --- also check the case that doesn't rewrite the table -CREATE TABLE vtype2 (a int); -INSERT INTO vtype2 VALUES (1); -ALTER TABLE vtype2 ADD COLUMN b varchar(10) DEFAULT 'xxx'; -ALTER TABLE vtype2 ALTER COLUMN b SET DEFAULT 'yyy'; -INSERT INTO vtype2 VALUES (2); -ALTER TABLE vtype2 ALTER COLUMN b TYPE varchar(20) USING b::varchar(20); -SELECT * FROM vtype2; - a | b ----+----- - 1 | xxx - 2 | yyy -(2 rows) - --- Ensure that defaults are checked when evaluating whether HOT update --- is possible, this was broken for a while: --- https://postgr.es/m/20190202133521.ylauh3ckqa7colzj%40alap3.anarazel.de -BEGIN; -CREATE TABLE t(); -INSERT INTO t DEFAULT VALUES; -ALTER TABLE t ADD COLUMN a int DEFAULT 1; -CREATE INDEX ON t(a); --- set column with a default 1 to NULL, due to a bug that wasn't --- noticed has heap_getattr buggily returned NULL for default columns -UPDATE t SET a = NULL; --- verify that index and non-index scans show the same result -SET LOCAL enable_seqscan = true; -SELECT * FROM t WHERE a IS NULL; - a ---- - -(1 row) - -SET LOCAL enable_seqscan = false; -SELECT * FROM t WHERE a IS NULL; - a ---- - -(1 row) - -ROLLBACK; --- verify that a default set on a non-plain table doesn't set a missing --- value on the attribute -CREATE FOREIGN DATA WRAPPER dummy; -CREATE SERVER s0 FOREIGN DATA WRAPPER dummy; -CREATE FOREIGN TABLE ft1 (c1 integer NOT NULL) SERVER s0; -ALTER FOREIGN TABLE ft1 ADD COLUMN c8 integer DEFAULT 0; -ALTER FOREIGN TABLE ft1 ALTER COLUMN c8 TYPE char(10); -SELECT count(*) - FROM pg_attribute - WHERE attrelid = 'ft1'::regclass AND - (attmissingval IS NOT NULL OR atthasmissing); - count -------- - 0 -(1 row) - --- cleanup -DROP FOREIGN TABLE ft1; -DROP SERVER s0; -DROP FOREIGN DATA WRAPPER dummy; -DROP TABLE vtype; -DROP TABLE vtype2; -DROP TABLE follower; -DROP TABLE leader; -DROP FUNCTION test_trigger(); -DROP TABLE t1; -DROP FUNCTION set(name); -DROP FUNCTION comp(); -DROP TABLE m; -DROP TABLE has_volatile; -DROP EVENT TRIGGER has_volatile_rewrite; -DROP FUNCTION log_rewrite; -DROP SCHEMA fast_default; --- Leave a table with an active fast default in place, for pg_upgrade testing -set search_path = public; -create table has_fast_default(f1 int); -insert into has_fast_default values(1); -alter table has_fast_default add column f2 int default 42; -table has_fast_default; - f1 | f2 -----+---- - 1 | 42 -(1 row) - +psql: error: connection to server on socket "/tmp/GXpacC8XDJ/.s.PGSQL.52988" failed: No such file or directory + Is the server running locally and accepting connections on that socket? 
diff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/tablespace.out /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/results/tablespace.out --- /tmp/cirrus-ci-build/src/test/regress/expected/tablespace.out 2024-04-07 19:35:44.315735778 +0000 +++ /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/results/tablespace.out 2024-04-07 19:40:11.283571361 +0000 @@ -1,968 +1,2 @@ --- relative tablespace locations are not allowed -CREATE TABLESPACE regress_tblspace LOCATION 'relative'; -- fail -ERROR: tablespace location must be an absolute path --- empty tablespace locations are not usually allowed -CREATE TABLESPACE regress_tblspace LOCATION ''; -- fail -ERROR: tablespace location must be an absolute path --- as a special developer-only option to allow us to use tablespaces --- with streaming replication on the same server, an empty location --- can be allowed as a way to say that the tablespace should be created --- as a directory in pg_tblspc, rather than being a symlink -SET allow_in_place_tablespaces = true; --- create a tablespace using WITH clause -CREATE TABLESPACE regress_tblspacewith LOCATION '' WITH (some_nonexistent_parameter = true); -- fail -ERROR: unrecognized parameter "some_nonexistent_parameter" -CREATE TABLESPACE regress_tblspacewith LOCATION '' WITH (random_page_cost = 3.0); -- ok --- check to see the parameter was used -SELECT spcoptions FROM pg_tablespace WHERE spcname = 'regress_tblspacewith'; - spcoptions ------------------------- - {random_page_cost=3.0} -(1 row) - --- drop the tablespace so we can re-use the location -DROP TABLESPACE regress_tblspacewith; --- This returns a relative path as of an effect of allow_in_place_tablespaces, --- masking the tablespace OID used in the path name. -SELECT regexp_replace(pg_tablespace_location(oid), '(pg_tblspc)/(\d+)', '\1/NNN') - FROM pg_tablespace WHERE spcname = 'regress_tblspace'; - regexp_replace ----------------- - pg_tblspc/NNN -(1 row) - --- try setting and resetting some properties for the new tablespace -ALTER TABLESPACE regress_tblspace SET (random_page_cost = 1.0, seq_page_cost = 1.1); -ALTER TABLESPACE regress_tblspace SET (some_nonexistent_parameter = true); -- fail -ERROR: unrecognized parameter "some_nonexistent_parameter" -ALTER TABLESPACE regress_tblspace RESET (random_page_cost = 2.0); -- fail -ERROR: RESET must not include values for parameters -ALTER TABLESPACE regress_tblspace RESET (random_page_cost, effective_io_concurrency); -- ok --- REINDEX (TABLESPACE) --- catalogs and system tablespaces --- system catalog, fail -REINDEX (TABLESPACE regress_tblspace) TABLE pg_am; -ERROR: cannot move system relation "pg_am_name_index" -REINDEX (TABLESPACE regress_tblspace) TABLE CONCURRENTLY pg_am; -ERROR: cannot reindex system catalogs concurrently --- shared catalog, fail -REINDEX (TABLESPACE regress_tblspace) TABLE pg_authid; -ERROR: cannot move system relation "pg_authid_rolname_index" -REINDEX (TABLESPACE regress_tblspace) TABLE CONCURRENTLY pg_authid; -ERROR: cannot reindex system catalogs concurrently --- toast relations, fail -REINDEX (TABLESPACE regress_tblspace) INDEX pg_toast.pg_toast_1260_index; -ERROR: cannot move system relation "pg_toast_1260_index" -REINDEX (TABLESPACE regress_tblspace) INDEX CONCURRENTLY pg_toast.pg_toast_1260_index; -ERROR: cannot reindex system catalogs concurrently -REINDEX (TABLESPACE regress_tblspace) TABLE pg_toast.pg_toast_1260; -ERROR: cannot move system relation "pg_toast_1260_index" -REINDEX (TABLESPACE regress_tblspace) TABLE CONCURRENTLY 
pg_toast.pg_toast_1260; -ERROR: cannot reindex system catalogs concurrently --- system catalog, fail -REINDEX (TABLESPACE pg_global) TABLE pg_authid; -ERROR: cannot move system relation "pg_authid_rolname_index" -REINDEX (TABLESPACE pg_global) TABLE CONCURRENTLY pg_authid; -ERROR: cannot reindex system catalogs concurrently --- table with toast relation -CREATE TABLE regress_tblspace_test_tbl (num1 bigint, num2 double precision, t text); -INSERT INTO regress_tblspace_test_tbl (num1, num2, t) - SELECT round(random()*100), random(), 'text' - FROM generate_series(1, 10) s(i); -CREATE INDEX regress_tblspace_test_tbl_idx ON regress_tblspace_test_tbl (num1); --- move to global tablespace, fail -REINDEX (TABLESPACE pg_global) INDEX regress_tblspace_test_tbl_idx; -ERROR: only shared relations can be placed in pg_global tablespace -REINDEX (TABLESPACE pg_global) INDEX CONCURRENTLY regress_tblspace_test_tbl_idx; -ERROR: cannot move non-shared relation to tablespace "pg_global" --- check transactional behavior of REINDEX (TABLESPACE) -BEGIN; -REINDEX (TABLESPACE regress_tblspace) INDEX regress_tblspace_test_tbl_idx; -REINDEX (TABLESPACE regress_tblspace) TABLE regress_tblspace_test_tbl; -ROLLBACK; --- no relation moved to the new tablespace -SELECT c.relname FROM pg_class c, pg_tablespace s - WHERE c.reltablespace = s.oid AND s.spcname = 'regress_tblspace'; - relname ---------- -(0 rows) - --- check that all indexes are moved to a new tablespace with different --- relfilenode. --- Save first the existing relfilenode for the toast and main relations. -SELECT relfilenode as main_filenode FROM pg_class - WHERE relname = 'regress_tblspace_test_tbl_idx' \gset -SELECT relfilenode as toast_filenode FROM pg_class - WHERE oid = - (SELECT i.indexrelid - FROM pg_class c, - pg_index i - WHERE i.indrelid = c.reltoastrelid AND - c.relname = 'regress_tblspace_test_tbl') \gset -REINDEX (TABLESPACE regress_tblspace) TABLE regress_tblspace_test_tbl; -SELECT c.relname FROM pg_class c, pg_tablespace s - WHERE c.reltablespace = s.oid AND s.spcname = 'regress_tblspace' - ORDER BY c.relname; - relname -------------------------------- - regress_tblspace_test_tbl_idx -(1 row) - -ALTER TABLE regress_tblspace_test_tbl SET TABLESPACE regress_tblspace; -ALTER TABLE regress_tblspace_test_tbl SET TABLESPACE pg_default; -SELECT c.relname FROM pg_class c, pg_tablespace s - WHERE c.reltablespace = s.oid AND s.spcname = 'regress_tblspace' - ORDER BY c.relname; - relname -------------------------------- - regress_tblspace_test_tbl_idx -(1 row) - --- Move back to the default tablespace. 
-ALTER INDEX regress_tblspace_test_tbl_idx SET TABLESPACE pg_default; -SELECT c.relname FROM pg_class c, pg_tablespace s - WHERE c.reltablespace = s.oid AND s.spcname = 'regress_tblspace' - ORDER BY c.relname; - relname ---------- -(0 rows) - -REINDEX (TABLESPACE regress_tblspace, CONCURRENTLY) TABLE regress_tblspace_test_tbl; -SELECT c.relname FROM pg_class c, pg_tablespace s - WHERE c.reltablespace = s.oid AND s.spcname = 'regress_tblspace' - ORDER BY c.relname; - relname -------------------------------- - regress_tblspace_test_tbl_idx -(1 row) - -SELECT relfilenode = :main_filenode AS main_same FROM pg_class - WHERE relname = 'regress_tblspace_test_tbl_idx'; - main_same ------------ - f -(1 row) - -SELECT relfilenode = :toast_filenode as toast_same FROM pg_class - WHERE oid = - (SELECT i.indexrelid - FROM pg_class c, - pg_index i - WHERE i.indrelid = c.reltoastrelid AND - c.relname = 'regress_tblspace_test_tbl'); - toast_same ------------- - f -(1 row) - -DROP TABLE regress_tblspace_test_tbl; --- REINDEX (TABLESPACE) with partitions --- Create a partition tree and check the set of relations reindexed --- with their new tablespace. -CREATE TABLE tbspace_reindex_part (c1 int, c2 int) PARTITION BY RANGE (c1); -CREATE TABLE tbspace_reindex_part_0 PARTITION OF tbspace_reindex_part - FOR VALUES FROM (0) TO (10) PARTITION BY list (c2); -CREATE TABLE tbspace_reindex_part_0_1 PARTITION OF tbspace_reindex_part_0 - FOR VALUES IN (1); -CREATE TABLE tbspace_reindex_part_0_2 PARTITION OF tbspace_reindex_part_0 - FOR VALUES IN (2); --- This partitioned table will have no partitions. -CREATE TABLE tbspace_reindex_part_10 PARTITION OF tbspace_reindex_part - FOR VALUES FROM (10) TO (20) PARTITION BY list (c2); --- Create some partitioned indexes -CREATE INDEX tbspace_reindex_part_index ON ONLY tbspace_reindex_part (c1); -CREATE INDEX tbspace_reindex_part_index_0 ON ONLY tbspace_reindex_part_0 (c1); -ALTER INDEX tbspace_reindex_part_index ATTACH PARTITION tbspace_reindex_part_index_0; --- This partitioned index will have no partitions. -CREATE INDEX tbspace_reindex_part_index_10 ON ONLY tbspace_reindex_part_10 (c1); -ALTER INDEX tbspace_reindex_part_index ATTACH PARTITION tbspace_reindex_part_index_10; -CREATE INDEX tbspace_reindex_part_index_0_1 ON ONLY tbspace_reindex_part_0_1 (c1); -ALTER INDEX tbspace_reindex_part_index_0 ATTACH PARTITION tbspace_reindex_part_index_0_1; -CREATE INDEX tbspace_reindex_part_index_0_2 ON ONLY tbspace_reindex_part_0_2 (c1); -ALTER INDEX tbspace_reindex_part_index_0 ATTACH PARTITION tbspace_reindex_part_index_0_2; -SELECT relid, parentrelid, level FROM pg_partition_tree('tbspace_reindex_part_index') - ORDER BY relid, level; - relid | parentrelid | level ---------------------------------+------------------------------+------- - tbspace_reindex_part_index | | 0 - tbspace_reindex_part_index_0 | tbspace_reindex_part_index | 1 - tbspace_reindex_part_index_10 | tbspace_reindex_part_index | 1 - tbspace_reindex_part_index_0_1 | tbspace_reindex_part_index_0 | 2 - tbspace_reindex_part_index_0_2 | tbspace_reindex_part_index_0 | 2 -(5 rows) - --- Track the original tablespace, relfilenode and OID of each index --- in the tree. -CREATE TEMP TABLE reindex_temp_before AS - SELECT oid, relname, relfilenode, reltablespace - FROM pg_class - WHERE relname ~ 'tbspace_reindex_part_index'; -REINDEX (TABLESPACE regress_tblspace, CONCURRENTLY) TABLE tbspace_reindex_part; --- REINDEX CONCURRENTLY changes the OID of the old relation, hence a check --- based on the relation name below. 
-SELECT b.relname, - CASE WHEN a.relfilenode = b.relfilenode THEN 'relfilenode is unchanged' - ELSE 'relfilenode has changed' END AS filenode, - CASE WHEN a.reltablespace = b.reltablespace THEN 'reltablespace is unchanged' - ELSE 'reltablespace has changed' END AS tbspace - FROM reindex_temp_before b JOIN pg_class a ON b.relname = a.relname - ORDER BY 1; - relname | filenode | tbspace ---------------------------------+--------------------------+---------------------------- - tbspace_reindex_part_index | relfilenode is unchanged | reltablespace is unchanged - tbspace_reindex_part_index_0 | relfilenode is unchanged | reltablespace is unchanged - tbspace_reindex_part_index_0_1 | relfilenode has changed | reltablespace has changed - tbspace_reindex_part_index_0_2 | relfilenode has changed | reltablespace has changed - tbspace_reindex_part_index_10 | relfilenode is unchanged | reltablespace is unchanged -(5 rows) - -DROP TABLE tbspace_reindex_part; --- create a schema we can use -CREATE SCHEMA testschema; --- try a table -CREATE TABLE testschema.foo (i int) TABLESPACE regress_tblspace; -SELECT relname, spcname FROM pg_catalog.pg_tablespace t, pg_catalog.pg_class c - where c.reltablespace = t.oid AND c.relname = 'foo'; - relname | spcname ----------+------------------ - foo | regress_tblspace -(1 row) - -INSERT INTO testschema.foo VALUES(1); -INSERT INTO testschema.foo VALUES(2); --- tables from dynamic sources -CREATE TABLE testschema.asselect TABLESPACE regress_tblspace AS SELECT 1; -SELECT relname, spcname FROM pg_catalog.pg_tablespace t, pg_catalog.pg_class c - where c.reltablespace = t.oid AND c.relname = 'asselect'; - relname | spcname -----------+------------------ - asselect | regress_tblspace -(1 row) - -PREPARE selectsource(int) AS SELECT $1; -CREATE TABLE testschema.asexecute TABLESPACE regress_tblspace - AS EXECUTE selectsource(2); -SELECT relname, spcname FROM pg_catalog.pg_tablespace t, pg_catalog.pg_class c - where c.reltablespace = t.oid AND c.relname = 'asexecute'; - relname | spcname ------------+------------------ - asexecute | regress_tblspace -(1 row) - --- index -CREATE INDEX foo_idx on testschema.foo(i) TABLESPACE regress_tblspace; -SELECT relname, spcname FROM pg_catalog.pg_tablespace t, pg_catalog.pg_class c - where c.reltablespace = t.oid AND c.relname = 'foo_idx'; - relname | spcname ----------+------------------ - foo_idx | regress_tblspace -(1 row) - --- check \d output -\d testschema.foo - Table "testschema.foo" - Column | Type | Collation | Nullable | Default ---------+---------+-----------+----------+--------- - i | integer | | | -Indexes: - "foo_idx" btree (i), tablespace "regress_tblspace" -Tablespace: "regress_tblspace" - -\d testschema.foo_idx - Index "testschema.foo_idx" - Column | Type | Key? 
| Definition ---------+---------+------+------------ - i | integer | yes | i -btree, for table "testschema.foo" -Tablespace: "regress_tblspace" - --- --- partitioned table --- -CREATE TABLE testschema.part (a int) PARTITION BY LIST (a); -SET default_tablespace TO pg_global; -CREATE TABLE testschema.part_1 PARTITION OF testschema.part FOR VALUES IN (1); -ERROR: only shared relations can be placed in pg_global tablespace -RESET default_tablespace; -CREATE TABLE testschema.part_1 PARTITION OF testschema.part FOR VALUES IN (1); -SET default_tablespace TO regress_tblspace; -CREATE TABLE testschema.part_2 PARTITION OF testschema.part FOR VALUES IN (2); -SET default_tablespace TO pg_global; -CREATE TABLE testschema.part_3 PARTITION OF testschema.part FOR VALUES IN (3); -ERROR: only shared relations can be placed in pg_global tablespace -ALTER TABLE testschema.part SET TABLESPACE regress_tblspace; -CREATE TABLE testschema.part_3 PARTITION OF testschema.part FOR VALUES IN (3); -CREATE TABLE testschema.part_4 PARTITION OF testschema.part FOR VALUES IN (4) - TABLESPACE pg_default; -CREATE TABLE testschema.part_56 PARTITION OF testschema.part FOR VALUES IN (5, 6) - PARTITION BY LIST (a); -ALTER TABLE testschema.part SET TABLESPACE pg_default; -CREATE TABLE testschema.part_78 PARTITION OF testschema.part FOR VALUES IN (7, 8) - PARTITION BY LIST (a); -ERROR: only shared relations can be placed in pg_global tablespace -CREATE TABLE testschema.part_910 PARTITION OF testschema.part FOR VALUES IN (9, 10) - PARTITION BY LIST (a) TABLESPACE regress_tblspace; -RESET default_tablespace; -CREATE TABLE testschema.part_78 PARTITION OF testschema.part FOR VALUES IN (7, 8) - PARTITION BY LIST (a); -SELECT relname, spcname FROM pg_catalog.pg_class c - JOIN pg_catalog.pg_namespace n ON (c.relnamespace = n.oid) - LEFT JOIN pg_catalog.pg_tablespace t ON c.reltablespace = t.oid - where c.relname LIKE 'part%' AND n.nspname = 'testschema' order by relname; - relname | spcname -----------+------------------ - part | - part_1 | - part_2 | regress_tblspace - part_3 | regress_tblspace - part_4 | - part_56 | regress_tblspace - part_78 | - part_910 | regress_tblspace -(8 rows) - -RESET default_tablespace; -DROP TABLE testschema.part; --- partitioned index -CREATE TABLE testschema.part (a int) PARTITION BY LIST (a); -CREATE TABLE testschema.part1 PARTITION OF testschema.part FOR VALUES IN (1); -CREATE INDEX part_a_idx ON testschema.part (a) TABLESPACE regress_tblspace; -CREATE TABLE testschema.part2 PARTITION OF testschema.part FOR VALUES IN (2); -SELECT relname, spcname FROM pg_catalog.pg_tablespace t, pg_catalog.pg_class c - where c.reltablespace = t.oid AND c.relname LIKE 'part%_idx' ORDER BY relname; - relname | spcname --------------+------------------ - part1_a_idx | regress_tblspace - part2_a_idx | regress_tblspace - part_a_idx | regress_tblspace -(3 rows) - -\d testschema.part - Partitioned table "testschema.part" - Column | Type | Collation | Nullable | Default ---------+---------+-----------+----------+--------- - a | integer | | | -Partition key: LIST (a) -Indexes: - "part_a_idx" btree (a), tablespace "regress_tblspace" -Number of partitions: 2 (Use \d+ to list them.) 
- -\d+ testschema.part - Partitioned table "testschema.part" - Column | Type | Collation | Nullable | Default | Storage | Stats target | Description ---------+---------+-----------+----------+---------+---------+--------------+------------- - a | integer | | | | plain | | -Partition key: LIST (a) -Indexes: - "part_a_idx" btree (a), tablespace "regress_tblspace" -Partitions: testschema.part1 FOR VALUES IN (1), - testschema.part2 FOR VALUES IN (2) - -\d testschema.part1 - Table "testschema.part1" - Column | Type | Collation | Nullable | Default ---------+---------+-----------+----------+--------- - a | integer | | | -Partition of: testschema.part FOR VALUES IN (1) -Indexes: - "part1_a_idx" btree (a), tablespace "regress_tblspace" - -\d+ testschema.part1 - Table "testschema.part1" - Column | Type | Collation | Nullable | Default | Storage | Stats target | Description ---------+---------+-----------+----------+---------+---------+--------------+------------- - a | integer | | | | plain | | -Partition of: testschema.part FOR VALUES IN (1) -Partition constraint: ((a IS NOT NULL) AND (a = 1)) -Indexes: - "part1_a_idx" btree (a), tablespace "regress_tblspace" - -\d testschema.part_a_idx -Partitioned index "testschema.part_a_idx" - Column | Type | Key? | Definition ---------+---------+------+------------ - a | integer | yes | a -btree, for table "testschema.part" -Number of partitions: 2 (Use \d+ to list them.) -Tablespace: "regress_tblspace" - -\d+ testschema.part_a_idx - Partitioned index "testschema.part_a_idx" - Column | Type | Key? | Definition | Storage | Stats target ---------+---------+------+------------+---------+-------------- - a | integer | yes | a | plain | -btree, for table "testschema.part" -Partitions: testschema.part1_a_idx, - testschema.part2_a_idx -Tablespace: "regress_tblspace" - --- partitioned rels cannot specify the default tablespace. 
These fail: -CREATE TABLE testschema.dflt (a int PRIMARY KEY) PARTITION BY LIST (a) TABLESPACE pg_default; -ERROR: cannot specify default tablespace for partitioned relations -CREATE TABLE testschema.dflt (a int PRIMARY KEY USING INDEX TABLESPACE pg_default) PARTITION BY LIST (a); -ERROR: cannot specify default tablespace for partitioned relations -SET default_tablespace TO 'pg_default'; -CREATE TABLE testschema.dflt (a int PRIMARY KEY) PARTITION BY LIST (a) TABLESPACE regress_tblspace; -ERROR: cannot specify default tablespace for partitioned relations -CREATE TABLE testschema.dflt (a int PRIMARY KEY USING INDEX TABLESPACE regress_tblspace) PARTITION BY LIST (a); -ERROR: cannot specify default tablespace for partitioned relations --- but these work: -CREATE TABLE testschema.dflt (a int PRIMARY KEY USING INDEX TABLESPACE regress_tblspace) PARTITION BY LIST (a) TABLESPACE regress_tblspace; -SET default_tablespace TO ''; -CREATE TABLE testschema.dflt2 (a int PRIMARY KEY) PARTITION BY LIST (a); -DROP TABLE testschema.dflt, testschema.dflt2; --- check that default_tablespace doesn't affect ALTER TABLE index rebuilds -CREATE TABLE testschema.test_default_tab(id bigint) TABLESPACE regress_tblspace; -INSERT INTO testschema.test_default_tab VALUES (1); -CREATE INDEX test_index1 on testschema.test_default_tab (id); -CREATE INDEX test_index2 on testschema.test_default_tab (id) TABLESPACE regress_tblspace; -ALTER TABLE testschema.test_default_tab ADD CONSTRAINT test_index3 PRIMARY KEY (id); -ALTER TABLE testschema.test_default_tab ADD CONSTRAINT test_index4 UNIQUE (id) USING INDEX TABLESPACE regress_tblspace; -\d testschema.test_index1 - Index "testschema.test_index1" - Column | Type | Key? | Definition ---------+--------+------+------------ - id | bigint | yes | id -btree, for table "testschema.test_default_tab" - -\d testschema.test_index2 - Index "testschema.test_index2" - Column | Type | Key? | Definition ---------+--------+------+------------ - id | bigint | yes | id -btree, for table "testschema.test_default_tab" -Tablespace: "regress_tblspace" - -\d testschema.test_index3 - Index "testschema.test_index3" - Column | Type | Key? | Definition ---------+--------+------+------------ - id | bigint | yes | id -primary key, btree, for table "testschema.test_default_tab" - -\d testschema.test_index4 - Index "testschema.test_index4" - Column | Type | Key? | Definition ---------+--------+------+------------ - id | bigint | yes | id -unique, btree, for table "testschema.test_default_tab" -Tablespace: "regress_tblspace" - --- use a custom tablespace for default_tablespace -SET default_tablespace TO regress_tblspace; --- tablespace should not change if no rewrite -ALTER TABLE testschema.test_default_tab ALTER id TYPE bigint; -\d testschema.test_index1 - Index "testschema.test_index1" - Column | Type | Key? | Definition ---------+--------+------+------------ - id | bigint | yes | id -btree, for table "testschema.test_default_tab" - -\d testschema.test_index2 - Index "testschema.test_index2" - Column | Type | Key? | Definition ---------+--------+------+------------ - id | bigint | yes | id -btree, for table "testschema.test_default_tab" -Tablespace: "regress_tblspace" - -\d testschema.test_index3 - Index "testschema.test_index3" - Column | Type | Key? | Definition ---------+--------+------+------------ - id | bigint | yes | id -primary key, btree, for table "testschema.test_default_tab" - -\d testschema.test_index4 - Index "testschema.test_index4" - Column | Type | Key? 
| Definition ---------+--------+------+------------ - id | bigint | yes | id -unique, btree, for table "testschema.test_default_tab" -Tablespace: "regress_tblspace" - -SELECT * FROM testschema.test_default_tab; - id ----- - 1 -(1 row) - --- tablespace should not change even if there is an index rewrite -ALTER TABLE testschema.test_default_tab ALTER id TYPE int; -\d testschema.test_index1 - Index "testschema.test_index1" - Column | Type | Key? | Definition ---------+---------+------+------------ - id | integer | yes | id -btree, for table "testschema.test_default_tab" - -\d testschema.test_index2 - Index "testschema.test_index2" - Column | Type | Key? | Definition ---------+---------+------+------------ - id | integer | yes | id -btree, for table "testschema.test_default_tab" -Tablespace: "regress_tblspace" - -\d testschema.test_index3 - Index "testschema.test_index3" - Column | Type | Key? | Definition ---------+---------+------+------------ - id | integer | yes | id -primary key, btree, for table "testschema.test_default_tab" - -\d testschema.test_index4 - Index "testschema.test_index4" - Column | Type | Key? | Definition ---------+---------+------+------------ - id | integer | yes | id -unique, btree, for table "testschema.test_default_tab" -Tablespace: "regress_tblspace" - -SELECT * FROM testschema.test_default_tab; - id ----- - 1 -(1 row) - --- now use the default tablespace for default_tablespace -SET default_tablespace TO ''; --- tablespace should not change if no rewrite -ALTER TABLE testschema.test_default_tab ALTER id TYPE int; -\d testschema.test_index1 - Index "testschema.test_index1" - Column | Type | Key? | Definition ---------+---------+------+------------ - id | integer | yes | id -btree, for table "testschema.test_default_tab" - -\d testschema.test_index2 - Index "testschema.test_index2" - Column | Type | Key? | Definition ---------+---------+------+------------ - id | integer | yes | id -btree, for table "testschema.test_default_tab" -Tablespace: "regress_tblspace" - -\d testschema.test_index3 - Index "testschema.test_index3" - Column | Type | Key? | Definition ---------+---------+------+------------ - id | integer | yes | id -primary key, btree, for table "testschema.test_default_tab" - -\d testschema.test_index4 - Index "testschema.test_index4" - Column | Type | Key? | Definition ---------+---------+------+------------ - id | integer | yes | id -unique, btree, for table "testschema.test_default_tab" -Tablespace: "regress_tblspace" - --- tablespace should not change even if there is an index rewrite -ALTER TABLE testschema.test_default_tab ALTER id TYPE bigint; -\d testschema.test_index1 - Index "testschema.test_index1" - Column | Type | Key? | Definition ---------+--------+------+------------ - id | bigint | yes | id -btree, for table "testschema.test_default_tab" - -\d testschema.test_index2 - Index "testschema.test_index2" - Column | Type | Key? | Definition ---------+--------+------+------------ - id | bigint | yes | id -btree, for table "testschema.test_default_tab" -Tablespace: "regress_tblspace" - -\d testschema.test_index3 - Index "testschema.test_index3" - Column | Type | Key? | Definition ---------+--------+------+------------ - id | bigint | yes | id -primary key, btree, for table "testschema.test_default_tab" - -\d testschema.test_index4 - Index "testschema.test_index4" - Column | Type | Key? 
| Definition ---------+--------+------+------------ - id | bigint | yes | id -unique, btree, for table "testschema.test_default_tab" -Tablespace: "regress_tblspace" - -DROP TABLE testschema.test_default_tab; --- check that default_tablespace doesn't affect ALTER TABLE index rebuilds --- (this time with a partitioned table) -CREATE TABLE testschema.test_default_tab_p(id bigint, val bigint) - PARTITION BY LIST (id) TABLESPACE regress_tblspace; -CREATE TABLE testschema.test_default_tab_p1 PARTITION OF testschema.test_default_tab_p - FOR VALUES IN (1); -INSERT INTO testschema.test_default_tab_p VALUES (1); -CREATE INDEX test_index1 on testschema.test_default_tab_p (val); -CREATE INDEX test_index2 on testschema.test_default_tab_p (val) TABLESPACE regress_tblspace; -ALTER TABLE testschema.test_default_tab_p ADD CONSTRAINT test_index3 PRIMARY KEY (id); -ALTER TABLE testschema.test_default_tab_p ADD CONSTRAINT test_index4 UNIQUE (id) USING INDEX TABLESPACE regress_tblspace; -\d testschema.test_index1 -Partitioned index "testschema.test_index1" - Column | Type | Key? | Definition ---------+--------+------+------------ - val | bigint | yes | val -btree, for table "testschema.test_default_tab_p" -Number of partitions: 1 (Use \d+ to list them.) - -\d testschema.test_index2 -Partitioned index "testschema.test_index2" - Column | Type | Key? | Definition ---------+--------+------+------------ - val | bigint | yes | val -btree, for table "testschema.test_default_tab_p" -Number of partitions: 1 (Use \d+ to list them.) -Tablespace: "regress_tblspace" - -\d testschema.test_index3 -Partitioned index "testschema.test_index3" - Column | Type | Key? | Definition ---------+--------+------+------------ - id | bigint | yes | id -primary key, btree, for table "testschema.test_default_tab_p" -Number of partitions: 1 (Use \d+ to list them.) - -\d testschema.test_index4 -Partitioned index "testschema.test_index4" - Column | Type | Key? | Definition ---------+--------+------+------------ - id | bigint | yes | id -unique, btree, for table "testschema.test_default_tab_p" -Number of partitions: 1 (Use \d+ to list them.) -Tablespace: "regress_tblspace" - --- use a custom tablespace for default_tablespace -SET default_tablespace TO regress_tblspace; --- tablespace should not change if no rewrite -ALTER TABLE testschema.test_default_tab_p ALTER val TYPE bigint; -\d testschema.test_index1 -Partitioned index "testschema.test_index1" - Column | Type | Key? | Definition ---------+--------+------+------------ - val | bigint | yes | val -btree, for table "testschema.test_default_tab_p" -Number of partitions: 1 (Use \d+ to list them.) - -\d testschema.test_index2 -Partitioned index "testschema.test_index2" - Column | Type | Key? | Definition ---------+--------+------+------------ - val | bigint | yes | val -btree, for table "testschema.test_default_tab_p" -Number of partitions: 1 (Use \d+ to list them.) -Tablespace: "regress_tblspace" - -\d testschema.test_index3 -Partitioned index "testschema.test_index3" - Column | Type | Key? | Definition ---------+--------+------+------------ - id | bigint | yes | id -primary key, btree, for table "testschema.test_default_tab_p" -Number of partitions: 1 (Use \d+ to list them.) - -\d testschema.test_index4 -Partitioned index "testschema.test_index4" - Column | Type | Key? | Definition ---------+--------+------+------------ - id | bigint | yes | id -unique, btree, for table "testschema.test_default_tab_p" -Number of partitions: 1 (Use \d+ to list them.) 
-Tablespace: "regress_tblspace" - -SELECT * FROM testschema.test_default_tab_p; - id | val -----+----- - 1 | -(1 row) - --- tablespace should not change even if there is an index rewrite -ALTER TABLE testschema.test_default_tab_p ALTER val TYPE int; -\d testschema.test_index1 -Partitioned index "testschema.test_index1" - Column | Type | Key? | Definition ---------+---------+------+------------ - val | integer | yes | val -btree, for table "testschema.test_default_tab_p" -Number of partitions: 1 (Use \d+ to list them.) - -\d testschema.test_index2 -Partitioned index "testschema.test_index2" - Column | Type | Key? | Definition ---------+---------+------+------------ - val | integer | yes | val -btree, for table "testschema.test_default_tab_p" -Number of partitions: 1 (Use \d+ to list them.) -Tablespace: "regress_tblspace" - -\d testschema.test_index3 -Partitioned index "testschema.test_index3" - Column | Type | Key? | Definition ---------+--------+------+------------ - id | bigint | yes | id -primary key, btree, for table "testschema.test_default_tab_p" -Number of partitions: 1 (Use \d+ to list them.) - -\d testschema.test_index4 -Partitioned index "testschema.test_index4" - Column | Type | Key? | Definition ---------+--------+------+------------ - id | bigint | yes | id -unique, btree, for table "testschema.test_default_tab_p" -Number of partitions: 1 (Use \d+ to list them.) -Tablespace: "regress_tblspace" - -SELECT * FROM testschema.test_default_tab_p; - id | val -----+----- - 1 | -(1 row) - --- now use the default tablespace for default_tablespace -SET default_tablespace TO ''; --- tablespace should not change if no rewrite -ALTER TABLE testschema.test_default_tab_p ALTER val TYPE int; -\d testschema.test_index1 -Partitioned index "testschema.test_index1" - Column | Type | Key? | Definition ---------+---------+------+------------ - val | integer | yes | val -btree, for table "testschema.test_default_tab_p" -Number of partitions: 1 (Use \d+ to list them.) - -\d testschema.test_index2 -Partitioned index "testschema.test_index2" - Column | Type | Key? | Definition ---------+---------+------+------------ - val | integer | yes | val -btree, for table "testschema.test_default_tab_p" -Number of partitions: 1 (Use \d+ to list them.) -Tablespace: "regress_tblspace" - -\d testschema.test_index3 -Partitioned index "testschema.test_index3" - Column | Type | Key? | Definition ---------+--------+------+------------ - id | bigint | yes | id -primary key, btree, for table "testschema.test_default_tab_p" -Number of partitions: 1 (Use \d+ to list them.) - -\d testschema.test_index4 -Partitioned index "testschema.test_index4" - Column | Type | Key? | Definition ---------+--------+------+------------ - id | bigint | yes | id -unique, btree, for table "testschema.test_default_tab_p" -Number of partitions: 1 (Use \d+ to list them.) -Tablespace: "regress_tblspace" - --- tablespace should not change even if there is an index rewrite -ALTER TABLE testschema.test_default_tab_p ALTER val TYPE bigint; -\d testschema.test_index1 -Partitioned index "testschema.test_index1" - Column | Type | Key? | Definition ---------+--------+------+------------ - val | bigint | yes | val -btree, for table "testschema.test_default_tab_p" -Number of partitions: 1 (Use \d+ to list them.) - -\d testschema.test_index2 -Partitioned index "testschema.test_index2" - Column | Type | Key? 
| Definition ---------+--------+------+------------ - val | bigint | yes | val -btree, for table "testschema.test_default_tab_p" -Number of partitions: 1 (Use \d+ to list them.) -Tablespace: "regress_tblspace" - -\d testschema.test_index3 -Partitioned index "testschema.test_index3" - Column | Type | Key? | Definition ---------+--------+------+------------ - id | bigint | yes | id -primary key, btree, for table "testschema.test_default_tab_p" -Number of partitions: 1 (Use \d+ to list them.) - -\d testschema.test_index4 -Partitioned index "testschema.test_index4" - Column | Type | Key? | Definition ---------+--------+------+------------ - id | bigint | yes | id -unique, btree, for table "testschema.test_default_tab_p" -Number of partitions: 1 (Use \d+ to list them.) -Tablespace: "regress_tblspace" - -DROP TABLE testschema.test_default_tab_p; --- check that default_tablespace affects index additions in ALTER TABLE -CREATE TABLE testschema.test_tab(id int) TABLESPACE regress_tblspace; -INSERT INTO testschema.test_tab VALUES (1); -SET default_tablespace TO regress_tblspace; -ALTER TABLE testschema.test_tab ADD CONSTRAINT test_tab_unique UNIQUE (id); -SET default_tablespace TO ''; -ALTER TABLE testschema.test_tab ADD CONSTRAINT test_tab_pkey PRIMARY KEY (id); -\d testschema.test_tab_unique - Index "testschema.test_tab_unique" - Column | Type | Key? | Definition ---------+---------+------+------------ - id | integer | yes | id -unique, btree, for table "testschema.test_tab" -Tablespace: "regress_tblspace" - -\d testschema.test_tab_pkey - Index "testschema.test_tab_pkey" - Column | Type | Key? | Definition ---------+---------+------+------------ - id | integer | yes | id -primary key, btree, for table "testschema.test_tab" - -SELECT * FROM testschema.test_tab; - id ----- - 1 -(1 row) - -DROP TABLE testschema.test_tab; --- check that default_tablespace is handled correctly by multi-command --- ALTER TABLE that includes a tablespace-preserving rewrite -CREATE TABLE testschema.test_tab(a int, b int, c int); -SET default_tablespace TO regress_tblspace; -ALTER TABLE testschema.test_tab ADD CONSTRAINT test_tab_unique UNIQUE (a); -CREATE INDEX test_tab_a_idx ON testschema.test_tab (a); -SET default_tablespace TO ''; -CREATE INDEX test_tab_b_idx ON testschema.test_tab (b); -\d testschema.test_tab_unique - Index "testschema.test_tab_unique" - Column | Type | Key? | Definition ---------+---------+------+------------ - a | integer | yes | a -unique, btree, for table "testschema.test_tab" -Tablespace: "regress_tblspace" - -\d testschema.test_tab_a_idx - Index "testschema.test_tab_a_idx" - Column | Type | Key? | Definition ---------+---------+------+------------ - a | integer | yes | a -btree, for table "testschema.test_tab" -Tablespace: "regress_tblspace" - -\d testschema.test_tab_b_idx - Index "testschema.test_tab_b_idx" - Column | Type | Key? | Definition ---------+---------+------+------------ - b | integer | yes | b -btree, for table "testschema.test_tab" - -ALTER TABLE testschema.test_tab ALTER b TYPE bigint, ADD UNIQUE (c); -\d testschema.test_tab_unique - Index "testschema.test_tab_unique" - Column | Type | Key? | Definition ---------+---------+------+------------ - a | integer | yes | a -unique, btree, for table "testschema.test_tab" -Tablespace: "regress_tblspace" - -\d testschema.test_tab_a_idx - Index "testschema.test_tab_a_idx" - Column | Type | Key? 
| Definition ---------+---------+------+------------ - a | integer | yes | a -btree, for table "testschema.test_tab" -Tablespace: "regress_tblspace" - -\d testschema.test_tab_b_idx - Index "testschema.test_tab_b_idx" - Column | Type | Key? | Definition ---------+--------+------+------------ - b | bigint | yes | b -btree, for table "testschema.test_tab" - -DROP TABLE testschema.test_tab; --- let's try moving a table from one place to another -CREATE TABLE testschema.atable AS VALUES (1), (2); -CREATE UNIQUE INDEX anindex ON testschema.atable(column1); -ALTER TABLE testschema.atable SET TABLESPACE regress_tblspace; -ALTER INDEX testschema.anindex SET TABLESPACE regress_tblspace; -ALTER INDEX testschema.part_a_idx SET TABLESPACE pg_global; -ERROR: only shared relations can be placed in pg_global tablespace -ALTER INDEX testschema.part_a_idx SET TABLESPACE pg_default; -ALTER INDEX testschema.part_a_idx SET TABLESPACE regress_tblspace; -INSERT INTO testschema.atable VALUES(3); -- ok -INSERT INTO testschema.atable VALUES(1); -- fail (checks index) -ERROR: duplicate key value violates unique constraint "anindex" -DETAIL: Key (column1)=(1) already exists. -SELECT COUNT(*) FROM testschema.atable; -- checks heap - count -------- - 3 -(1 row) - --- let's try moving a materialized view from one place to another -CREATE MATERIALIZED VIEW testschema.amv AS SELECT * FROM testschema.atable; -ALTER MATERIALIZED VIEW testschema.amv SET TABLESPACE regress_tblspace; -REFRESH MATERIALIZED VIEW testschema.amv; -SELECT COUNT(*) FROM testschema.amv; - count -------- - 3 -(1 row) - --- Will fail with bad path -CREATE TABLESPACE regress_badspace LOCATION '/no/such/location'; -ERROR: directory "/no/such/location" does not exist --- No such tablespace -CREATE TABLE bar (i int) TABLESPACE regress_nosuchspace; -ERROR: tablespace "regress_nosuchspace" does not exist --- Fail, in use for some partitioned object -DROP TABLESPACE regress_tblspace; -ERROR: tablespace "regress_tblspace" cannot be dropped because some objects depend on it -DETAIL: tablespace for index testschema.part_a_idx -ALTER INDEX testschema.part_a_idx SET TABLESPACE pg_default; --- Fail, not empty -DROP TABLESPACE regress_tblspace; -ERROR: tablespace "regress_tblspace" is not empty -CREATE ROLE regress_tablespace_user1 login; -CREATE ROLE regress_tablespace_user2 login; -GRANT USAGE ON SCHEMA testschema TO regress_tablespace_user2; -ALTER TABLESPACE regress_tblspace OWNER TO regress_tablespace_user1; -CREATE TABLE testschema.tablespace_acl (c int); --- new owner lacks permission to create this index from scratch -CREATE INDEX k ON testschema.tablespace_acl (c) TABLESPACE regress_tblspace; -ALTER TABLE testschema.tablespace_acl OWNER TO regress_tablespace_user2; -SET SESSION ROLE regress_tablespace_user2; -CREATE TABLE tablespace_table (i int) TABLESPACE regress_tblspace; -- fail -ERROR: permission denied for tablespace regress_tblspace -ALTER TABLE testschema.tablespace_acl ALTER c TYPE bigint; -REINDEX (TABLESPACE regress_tblspace) TABLE tablespace_table; -- fail -ERROR: permission denied for tablespace regress_tblspace -REINDEX (TABLESPACE regress_tblspace, CONCURRENTLY) TABLE tablespace_table; -- fail -ERROR: permission denied for tablespace regress_tblspace -RESET ROLE; -ALTER TABLESPACE regress_tblspace RENAME TO regress_tblspace_renamed; -ALTER TABLE ALL IN TABLESPACE regress_tblspace_renamed SET TABLESPACE pg_default; -ALTER INDEX ALL IN TABLESPACE regress_tblspace_renamed SET TABLESPACE pg_default; -ALTER MATERIALIZED VIEW ALL IN TABLESPACE 
regress_tblspace_renamed SET TABLESPACE pg_default;
--- Should show notice that nothing was done
-ALTER TABLE ALL IN TABLESPACE regress_tblspace_renamed SET TABLESPACE pg_default;
-NOTICE: no matching relations in tablespace "regress_tblspace_renamed" found
-ALTER MATERIALIZED VIEW ALL IN TABLESPACE regress_tblspace_renamed SET TABLESPACE pg_default;
-NOTICE: no matching relations in tablespace "regress_tblspace_renamed" found
--- Should succeed
-DROP TABLESPACE regress_tblspace_renamed;
-DROP SCHEMA testschema CASCADE;
-NOTICE: drop cascades to 7 other objects
-DETAIL: drop cascades to table testschema.foo
-drop cascades to table testschema.asselect
-drop cascades to table testschema.asexecute
-drop cascades to table testschema.part
-drop cascades to table testschema.atable
-drop cascades to materialized view testschema.amv
-drop cascades to table testschema.tablespace_acl
-DROP ROLE regress_tablespace_user1;
-DROP ROLE regress_tablespace_user2;
+psql: error: connection to server on socket "/tmp/GXpacC8XDJ/.s.PGSQL.52988" failed: No such file or directory
+ Is the server running locally and accepting connections on that socket?
=== EOF ===
[19:40:11.290](60.277s) not ok 2 - regression tests pass
[19:40:11.290](0.000s) # Failed test 'regression tests pass'
# at /tmp/cirrus-ci-build/src/test/recovery/t/027_stream_regress.pl line 95.
[19:40:11.291](0.000s) # got: '256'
# expected: '0'
psql: error: connection to server on socket "/tmp/GXpacC8XDJ/.s.PGSQL.52988" failed: No such file or directory
 Is the server running locally and accepting connections on that socket?
connection error: 'psql: error: connection to server on socket "/tmp/GXpacC8XDJ/.s.PGSQL.52988" failed: No such file or directory
 Is the server running locally and accepting connections on that socket?'
while running 'psql -XAtq -d port=52988 host=/tmp/GXpacC8XDJ dbname='postgres' -f - -v ON_ERROR_STOP=1' at /tmp/cirrus-ci-build/src/test/perl/PostgreSQL/Test/Cluster.pm line 2040.
# No postmaster PID for node "primary"
# Postmaster PID for node "standby_1" is 27776
### Stopping node "standby_1" using mode immediate
# Running: pg_ctl -D /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/t_027_stream_regress_standby_1_data/pgdata -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby_1"
[19:40:11.418](0.127s) # Tests were run but no plan was declared and done_testing() was not seen.
[19:40:11.418](0.000s) # Looks like your test exited with 4 just after 2.