# Checking port 50034
# Found port 50034
Name: primary
Data directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
Backup directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/backup
Archive directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/archives
Connection string: port=50034 host=C:/Windows/TEMP/ONsP2ixWOG
Log file: C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_primary.log
[23:10:04.698](0.112s) # initializing database system by copying initdb template
# Running: robocopy /E /NJS /NJH /NFL /NDL /NP C:/cirrus/build/tmp_install/initdb-template C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
# Running: C:\cirrus\build\src/test\regress\pg_regress.exe --config-auth C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
### Starting node "primary"
# Running: pg_ctl -w -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata -l C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_primary.log -o --cluster-name=primary start
waiting for server to start.... done
server started
# Postmaster PID for node "primary" is 5332
(standby_1,)
[23:10:06.180](1.482s) ok 1 - physical slot created on primary
# Taking pg_basebackup my_backup from node "primary"
# Running: pg_basebackup -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/backup/my_backup -h C:/Windows/TEMP/ONsP2ixWOG -p 50034 --checkpoint fast --no-sync
# Backup finished
# Checking port 50035
# Found port 50035
Name: standby_1
Data directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata
Backup directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/backup
Archive directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/archives
Connection string: port=50035 host=C:/Windows/TEMP/ONsP2ixWOG
Log file: C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_standby_1.log
# Initializing node "standby_1" from backup "my_backup" of node "primary"
### Enabling streaming replication for node "standby_1"
### Starting node "standby_1"
# Running: pg_ctl -w -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata -l C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_standby_1.log -o --cluster-name=standby_1 start
waiting for server to start.... done
server started
# Postmaster PID for node "standby_1" is 472
# using postmaster on C:/Windows/TEMP/ONsP2ixWOG, port 50034
ok 1 - test_setup 692 ms
# parallel group (20 tests): int4 varchar float4 pg_lsn boolean char money text oid int2 txid name bit regproc float8 uuid int8 enum rangetypes numeric
ok 2 + boolean 870 ms
ok 3 + char 869 ms
ok 4 + name 873 ms
ok 5 + varchar 632 ms
ok 6 + text 866 ms
ok 7 + int2 867 ms
ok 8 + int4 627 ms
ok 9 + int8 1157 ms
ok 10 + oid 861 ms
ok 11 + float4 624 ms
ok 12 + float8 1102 ms
ok 13 + bit 860 ms
ok 14 + numeric 1639 ms
ok 15 + txid 856 ms
ok 16 + uuid 1137 ms
ok 17 + enum 1148 ms
ok 18 + money 848 ms
ok 19 + rangetypes 1591 ms
ok 20 + pg_lsn 843 ms
ok 21 + regproc 848 ms
# parallel group (20 tests): lseg macaddr line strings md5 macaddr8 inet point time circle interval path date timestamp numerology timetz polygon box timestamptz multirangetypes
ok 22 + strings 829 ms
ok 23 + md5 828 ms
ok 24 + numerology 1316 ms
ok 25 + point 903 ms
ok 26 + lseg 669 ms
ok 27 + line 822 ms
ok 28 + box 1332 ms
ok 29 + path 1200 ms
ok 30 + polygon 1329 ms
ok 31 + circle 998 ms
ok 32 + date 1243 ms
ok 33 + time 954 ms
ok 34 + timetz 1303 ms
ok 35 + timestamp 1284 ms
ok 36 + timestamptz 1464 ms
ok 37 + interval 1059 ms
ok 38 + inet 883 ms
ok 39 + macaddr 708 ms
ok 40 + macaddr8 802 ms
ok 41 + multirangetypes 1764 ms
# parallel group (12 tests): misc_sanity type_sanity geometry comments unicode tstypes horology expressions mvcc opr_sanity xid regex
ok 42 + geometry 1093 ms
ok 43 + horology 1214 ms
ok 44 + tstypes 1171 ms
ok 45 + regex 1368 ms
ok 46 + type_sanity 1057 ms
ok 47 + opr_sanity 1329 ms
ok 48 + misc_sanity 982 ms
ok 49 + comments 1082 ms
ok 50 + expressions 1211 ms
ok 51 + unicode 1159 ms
ok 52 + xid 1346 ms
ok 53 + mvcc 1207 ms
# parallel group (5 tests): copyselect copydml copy insert_conflict insert
ok 54 + copy 530 ms
ok 55 + copyselect 304 ms
ok 56 + copydml 329 ms
ok 57 + insert 1409 ms
ok 58 + insert_conflict 853 ms
# parallel group (7 tests): create_type create_function_c create_schema create_operator create_procedure create_misc create_table
ok 59 + create_function_c 340 ms
ok 60 + create_misc 1298 ms
ok 61 + create_operator 382 ms
ok 62 + create_procedure 385 ms
ok 63 + create_table 1989 ms
ok 64 + create_type 329 ms
ok 65 + create_schema 376 ms
# parallel group (5 tests): index_including create_view index_including_gist create_index_spgist create_index
ok 66 + create_index 2736 ms
ok 67 + create_index_spgist 1792 ms
ok 68 + create_view 1300 ms
ok 69 + index_including 1130 ms
ok 70 + index_including_gist 1413 ms
# parallel group (16 tests): create_cast infinite_recurse select create_aggregate create_function_sql errors roleattributes hash_func drop_if_exists typed_table create_am constraints vacuum updatable_views inherit triggers
ok 71 + create_aggregate 1779 ms
ok 72 + create_function_sql 1786 ms
ok 73 + create_cast 1215 ms
ok 74 + constraints 2874 ms
ok 75 + triggers 5681 ms
ok 76 + select 1770 ms
ok 77 + inherit 5196 ms
ok 78 + typed_table 2356 ms
ok 79 + vacuum 3685 ms
ok 80 + drop_if_exists 2265 ms
ok 81 + updatable_views 4274 ms
ok 82 + roleattributes 2209 ms
ok 83 + create_am 2834 ms
ok 84 + hash_func 2206 ms
ok 85 + errors 2060 ms
ok 86 + infinite_recurse 1491 ms
ok 87 - sanity_check 692 ms
# parallel group (20 tests): select_having select_distinct_on select_implicit delete select_into random case prepared_xacts select_distinct namespace transactions arrays subselect union portals update hash_index join aggregates btree_index
ok 88 + select_into 1240 ms
ok 89 + select_distinct 1349 ms
ok 90 + select_distinct_on 975 ms
ok 91 + select_implicit 974 ms
ok 92 + select_having 972 ms
ok 93 + subselect 2254 ms
ok 94 + union 2253 ms
ok 95 + case 1260 ms
ok 96 + join 3970 ms
ok 97 + aggregates 4522 ms
ok 98 + transactions 2033 ms
ok 99 + random 1243 ms
ok 100 + portals 2365 ms
ok 101 + arrays 2194 ms
ok 102 + btree_index 6963 ms
ok 103 + hash_index 2812 ms
ok 104 + update 2811 ms
ok 105 + delete 980 ms
ok 106 + namespace 1762 ms
ok 107 + prepared_xacts 1243 ms
# parallel group (20 tests): init_privs security_label lock password collate tablesample spgist drop_operator object_address replica_identity gin groupingsets rowsecurity identity matview generated gist brin join_hash privileges
ok 108 + brin 10393 ms
ok 109 + gin 3479 ms
ok 110 + gist 4781 ms
ok 111 + spgist 3177 ms
ok 112 + privileges 11641 ms
ok 113 + init_privs 863 ms
ok 114 + security_label 1881 ms
ok 115 + collate 2454 ms
ok 116 + matview 3957 ms
ok 117 + lock 2059 ms
ok 118 + replica_identity 3461 ms
ok 119 + rowsecurity 3872 ms
ok 120 + object_address 3389 ms
ok 121 + tablesample 2743 ms
ok 122 + groupingsets 3458 ms
ok 123 + drop_operator 3184 ms
ok 124 + password 2225 ms
ok 125 + identity 3938 ms
ok 126 + generated 4670 ms
ok 127 + join_hash 10370 ms
# parallel group (2 tests): brin_bloom brin_multi
ok 128 + brin_bloom 293 ms
ok 129 + brin_multi 1489 ms
# parallel group (18 tests): misc alter_generic tid tidrangescan sysviews collate.utf8 collate.icu.utf8 create_role async tsrf tidscan misc_functions dbsize alter_operator merge incremental_sort create_table_like without_overlaps
ok 130 + create_table_like 1442 ms
ok 131 + alter_generic 694 ms
ok 132 + alter_operator 1110 ms
ok 133 + misc 691 ms
ok 134 + async 1028 ms
ok 135 + dbsize 1026 ms
ok 136 + merge 1234 ms
ok 137 + misc_functions 1023 ms
ok 138 + sysviews 781 ms
ok 139 + tsrf 1021 ms
ok 140 + tid 681 ms
ok 141 + tidscan 1018 ms
ok 142 + tidrangescan 678 ms
ok 143 + collate.utf8 857 ms
ok 144 + collate.icu.utf8 985 ms
ok 145 + incremental_sort 1378 ms
ok 146 + create_role 1011 ms
ok 147 + without_overlaps 2649 ms
# parallel group (7 tests): psql_crosstab collate.linux.utf8 amutils collate.windows.win1252 rules psql stats_ext
ok 148 + rules 2190 ms
ok 149 + psql 2188 ms
ok 150 + psql_crosstab 426 ms
ok 151 + amutils 543 ms
ok 152 + stats_ext 5508 ms
ok 153 + collate.linux.utf8 491 ms
ok 154 + collate.windows.win1252 1854 ms
ok 155 - select_parallel 7408 ms
ok 156 - write_parallel 781 ms
ok 157 - vacuum_parallel 467 ms
# parallel group (2 tests): subscription publication
ok 158 + publication 1613 ms
ok 159 + subscription 325 ms
# parallel group (17 tests): equivclass select_views dependency advisory_lock window xmlmap tsdicts functional_deps portals_p2 tsearch combocid indirect_toast guc cluster bitmapops foreign_data foreign_key
ok 160 + select_views 1131 ms
ok 161 + portals_p2 1923 ms
ok 162 + foreign_key 4254 ms
ok 163 + cluster 2207 ms
ok 164 + dependency 1444 ms
ok 165 + guc 2199 ms
ok 166 + bitmapops 2532 ms
ok 167 + combocid 2119 ms
ok 168 + tsearch 2069 ms
ok 169 + tsdicts 1436 ms
ok 170 + foreign_data 2922 ms
ok 171 + window 1432 ms
ok 172 + xmlmap 1430 ms
ok 173 + functional_deps 1901 ms
ok 174 + advisory_lock 1425 ms
ok 175 + indirect_toast 2159 ms
ok 176 + equivclass 1101 ms
# parallel group (8 tests): jsonpath json_encoding jsonpath_encoding sqljson json jsonb_jsonpath sqljson_queryfuncs jsonb
ok 177 + json 795 ms
ok 178 + jsonb 1084 ms
ok 179 + json_encoding 380 ms
ok 180 + jsonpath 298 ms
ok 181 + jsonpath_encoding 425 ms
ok 182 + jsonb_jsonpath 803 ms
ok 183 + sqljson 452 ms
ok 184 + sqljson_queryfuncs 801 ms
# parallel group (18 tests): plancache returning prepare largeobject truncate rowtypes conversion with limit copy2 sequence rangefuncs xml polymorphism temp domain plpgsql alter_table
ok 185 + plancache 965 ms
ok 186 + limit 2513 ms
ok 187 + plpgsql 3828 ms
ok 188 + copy2 2510 ms
ok 189 + temp 2570 ms
ok 190 + domain 2700 ms
ok 191 + rangefuncs 2565 ms
ok 192 + prepare 2009 ms
ok 193 + conversion 2446 ms
ok 194 + truncate 2006 ms
ok 195 + alter_table 5255 ms
ok 196 + sequence 2499 ms
ok 197 + polymorphism 2557 ms
ok 198 + rowtypes 2439 ms
ok 199 + returning 1999 ms
ok 200 + largeobject 1997 ms
ok 201 + with 2492 ms
ok 202 + xml 2550 ms
# parallel group (13 tests): hash_part reloptions predicate partition_info compression memoize explain indexing partition_join stats tuplesort partition_aggregate partition_prune
not ok 203 + partition_join 3896 ms
ok 204 + partition_prune 7718 ms
ok 205 + reloptions 1574 ms
ok 206 + hash_part 1036 ms
ok 207 + indexing 3845 ms
ok 208 + partition_aggregate 5203 ms
ok 209 + partition_info 2027 ms
ok 210 + tuplesort 4642 ms
ok 211 + explain 2855 ms
ok 212 + compression 2374 ms
ok 213 + memoize 2845 ms
ok 214 + stats 4425 ms
ok 215 + predicate 1802 ms
# parallel group (2 tests): oidjoins event_trigger
ok 216 + oidjoins 681 ms
ok 217 + event_trigger 891 ms
ok 218 - event_trigger_login 780 ms
ok 219 - fast_default 596 ms
ok 220 - tablespace 1097 ms
1..220
# 1 of 220 tests failed.
# The differences that caused some tests to fail can be viewed in the file "C:/cirrus/build/testrun/recovery/027_stream_regress/data/regression.diffs".
# A copy of the test summary that you see above is saved in the file "C:/cirrus/build/testrun/recovery/027_stream_regress/data/regression.out".
=== dumping C:\cirrus\build/testrun/recovery/027_stream_regress\data/regression.diffs ===
diff -w -U3 C:/cirrus/src/test/regress/expected/partition_join.out C:/cirrus/build/testrun/recovery/027_stream_regress/data/results/partition_join.out
--- C:/cirrus/src/test/regress/expected/partition_join.out	2024-03-27 23:06:12.260512500 +0000
+++ C:/cirrus/build/testrun/recovery/027_stream_regress/data/results/partition_join.out	2024-03-27 23:11:55.036028800 +0000
@@ -511,24 +511,29 @@
   (SELECT * FROM prt1 t2 TABLESAMPLE SYSTEM (t1.a) REPEATABLE(t1.b)) s
   ON t1.a = s.a;
                            QUERY PLAN
--------------------------------------------------------------
- Append
+-------------------------------------------------------------------------
+ Gather
+   Workers Planned: 2
+   ->  Parallel Append
     ->  Nested Loop
-          ->  Seq Scan on prt1_p1 t1_1
+          ->  Parallel Seq Scan on prt1_p1 t1_1
+          ->  Materialize
           ->  Sample Scan on prt1_p1 t2_1
                 Sampling: system (t1_1.a) REPEATABLE (t1_1.b)
                 Filter: (t1_1.a = a)
     ->  Nested Loop
-          ->  Seq Scan on prt1_p2 t1_2
+          ->  Parallel Seq Scan on prt1_p2 t1_2
+          ->  Materialize
           ->  Sample Scan on prt1_p2 t2_2
                 Sampling: system (t1_2.a) REPEATABLE (t1_2.b)
                 Filter: (t1_2.a = a)
     ->  Nested Loop
-          ->  Seq Scan on prt1_p3 t1_3
+          ->  Parallel Seq Scan on prt1_p3 t1_3
+          ->  Materialize
           ->  Sample Scan on prt1_p3 t2_3
                 Sampling: system (t1_3.a) REPEATABLE (t1_3.b)
                 Filter: (t1_3.a = a)
-(16 rows)
+(21 rows)
 
 -- lateral reference in scan's restriction clauses
 EXPLAIN (COSTS OFF)
@@ -2042,34 +2047,41 @@
   (SELECT * FROM prt1_l t2 TABLESAMPLE SYSTEM (t1.a) REPEATABLE(t1.b)) s
   ON t1.a = s.a AND t1.b = s.b AND t1.c = s.c;
                                        QUERY PLAN
-----------------------------------------------------------------------------------------
- Append
+----------------------------------------------------------------------------------------------------
+ Gather
+   Workers Planned: 2
+   ->  Parallel Append
     ->  Nested Loop
-          ->  Seq Scan on prt1_l_p1 t1_1
+          ->  Parallel Seq Scan on prt1_l_p1 t1_1
+          ->  Materialize
           ->  Sample Scan on prt1_l_p1 t2_1
                 Sampling: system (t1_1.a) REPEATABLE (t1_1.b)
                 Filter: ((t1_1.a = a) AND (t1_1.b = b) AND ((t1_1.c)::text = (c)::text))
     ->  Nested Loop
-          ->  Seq Scan on prt1_l_p2_p1 t1_2
-          ->  Sample Scan on prt1_l_p2_p1 t2_2
-                Sampling: system (t1_2.a) REPEATABLE (t1_2.b)
-                Filter: ((t1_2.a = a) AND (t1_2.b = b) AND ((t1_2.c)::text = (c)::text))
-    ->  Nested Loop
-          ->  Seq Scan on prt1_l_p2_p2 t1_3
+          ->  Parallel Seq Scan on prt1_l_p2_p2 t1_3
+          ->  Materialize
           ->  Sample Scan on prt1_l_p2_p2 t2_3
                 Sampling: system (t1_3.a) REPEATABLE (t1_3.b)
                 Filter: ((t1_3.a = a) AND (t1_3.b = b) AND ((t1_3.c)::text = (c)::text))
     ->  Nested Loop
-          ->  Seq Scan on prt1_l_p3_p1 t1_4
+          ->  Parallel Seq Scan on prt1_l_p2_p1 t1_2
+          ->  Materialize
+          ->  Sample Scan on prt1_l_p2_p1 t2_2
+                Sampling: system (t1_2.a) REPEATABLE (t1_2.b)
+                Filter: ((t1_2.a = a) AND (t1_2.b = b) AND ((t1_2.c)::text = (c)::text))
+    ->  Nested Loop
+          ->  Parallel Seq Scan on prt1_l_p3_p1 t1_4
+          ->  Materialize
           ->  Sample Scan on prt1_l_p3_p1 t2_4
                 Sampling: system (t1_4.a) REPEATABLE (t1_4.b)
                 Filter: ((t1_4.a = a) AND (t1_4.b = b) AND ((t1_4.c)::text = (c)::text))
     ->  Nested Loop
-          ->  Seq Scan on prt1_l_p3_p2 t1_5
+          ->  Parallel Seq Scan on prt1_l_p3_p2 t1_5
+          ->  Materialize
           ->  Sample Scan on prt1_l_p3_p2 t2_5
                 Sampling: system (t1_5.a) REPEATABLE (t1_5.b)
                 Filter: ((t1_5.a = a) AND (t1_5.b = b) AND ((t1_5.c)::text = (c)::text))
-(26 rows)
+(33 rows)
 
 -- partitionwise join with lateral reference in scan's restriction clauses
 EXPLAIN (COSTS OFF)
=== EOF ===
[23:12:05.106](118.926s) not ok 2 - regression tests pass
[23:12:05.106](0.000s) #   Failed test 'regression tests pass'
#   at C:/cirrus/src/test/recovery/t/027_stream_regress.pl line 95.
[23:12:05.107](0.000s) #          got: '256'
#     expected: '0'
1 1 1 1 2 1 9 1 4001 5 41 5 3 4 3 4 4 1 32 1 1 1 6 104 2 1 5 1006 1 2 5 17 -2 33 34 1 46 1 1 1 1 -1 1 1 -1 -32768 32767 1 9 1
Waiting for replication conn standby_1's replay_lsn to pass 0/1434EF88 on primary
done
# Running: pg_dumpall -f C:\cirrus\build/testrun/recovery/027_stream_regress\data/primary.dump --no-sync -p 50034 --no-unlogged-table-data
[23:12:09.431](4.325s) ok 3 - dump primary server
# Running: pg_dumpall -f C:\cirrus\build/testrun/recovery/027_stream_regress\data/standby.dump --no-sync -p 50035
[23:12:14.000](4.569s) ok 4 - dump standby server
# Running: diff C:\cirrus\build/testrun/recovery/027_stream_regress\data/primary.dump C:\cirrus\build/testrun/recovery/027_stream_regress\data/standby.dump
[23:12:14.367](0.367s) ok 5 - compare primary and standby dumps
[23:12:15.173](0.806s) ok 6 - check contents of pg_stat_statements on regression database
### Stopping node "standby_1" using mode fast
# Running: pg_ctl -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata -m fast stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby_1"
### Stopping node "primary" using mode fast
# Running: pg_ctl -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata -m fast stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "primary"
[23:12:16.001](0.828s) 1..6
[23:12:16.019](0.017s) # Looks like you failed 1 test of 6.