# Checking port 61266
# Found port 61266
Name: primary
Data directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
Backup directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/backup
Archive directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/archives
Connection string: port=61266 host=C:/Windows/TEMP/4Z6LSg4CCx
Log file: C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_primary.log
[07:47:43.079](0.083s) # initializing database system by copying initdb template
# Running: robocopy /E /NJS /NJH /NFL /NDL /NP C:/cirrus/build/tmp_install/initdb-template C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
# Running: C:\cirrus\build\src/test\regress\pg_regress.exe --config-auth C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
### Starting node "primary"
# Running: pg_ctl -w -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata -l C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_primary.log -o --cluster-name=primary start
waiting for server to start.... done
server started
# Postmaster PID for node "primary" is 2596
(standby_1,)
[07:47:44.749](1.670s) ok 1 - physical slot created on primary
# Taking pg_basebackup my_backup from node "primary"
# Running: pg_basebackup -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/backup/my_backup -h C:/Windows/TEMP/4Z6LSg4CCx -p 61266 --checkpoint fast --no-sync
# Backup finished
# Checking port 61267
# Found port 61267
Name: standby_1
Data directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata
Backup directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/backup
Archive directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/archives
Connection string: port=61267 host=C:/Windows/TEMP/4Z6LSg4CCx
Log file: C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_standby_1.log
# Initializing node "standby_1" from backup "my_backup" of node "primary"
### Enabling streaming replication for node "standby_1"
### Starting node "standby_1"
# Running: pg_ctl -w -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata -l C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_standby_1.log -o --cluster-name=standby_1 start
waiting for server to start.... done
server started
# Postmaster PID for node "standby_1" is 4692
# using postmaster on C:/Windows/TEMP/4Z6LSg4CCx, port 61266
ok 1 - test_setup 743 ms
# parallel group (20 tests): money pg_lsn regproc uuid enum int2 oid name varchar text float8 int8 float4 boolean char txid int4 bit rangetypes numeric
ok 2 + boolean 1420 ms
ok 3 + char 1419 ms
ok 4 + name 1310 ms
ok 5 + varchar 1311 ms
ok 6 + text 1311 ms
ok 7 + int2 1288 ms
ok 8 + int4 1421 ms
ok 9 + int8 1308 ms
ok 10 + oid 1284 ms
ok 11 + float4 1306 ms
ok 12 + float8 1303 ms
ok 13 + bit 1423 ms
ok 14 + numeric 2421 ms
ok 15 + txid 1402 ms
ok 16 + uuid 1249 ms
ok 17 + enum 1274 ms
ok 18 + money 733 ms
ok 19 + rangetypes 1485 ms
ok 20 + pg_lsn 824 ms
ok 21 + regproc 1087 ms
# parallel group (20 tests): md5 timetz point lseg line date circle macaddr time interval path strings macaddr8 inet numerology box multirangetypes polygon timestamp timestamptz
ok 22 + strings 1073 ms
ok 23 + md5 491 ms
ok 24 + numerology 1140 ms
ok 25 + point 659 ms
ok 26 + lseg 694 ms
ok 27 + line 692 ms
ok 28 + box 1346 ms
ok 29 + path 1062 ms
ok 30 + polygon 1622 ms
ok 31 + circle 911 ms
ok 32 + date 684 ms
ok 33 + time 908 ms
ok 34 + timetz 510 ms
ok 35 + timestamp 1613 ms
ok 36 + timestamptz 1620 ms
ok 37 + interval 1035 ms
ok 38 + inet 1091 ms
ok 39 + macaddr 897 ms
ok 40 + macaddr8 1086 ms
ok 41 + multirangetypes 1588 ms
# parallel group (12 tests): horology expressions xid mvcc regex type_sanity misc_sanity unicode opr_sanity tstypes comments geometry
ok 42 + geometry 1148 ms
ok 43 + horology 905 ms
ok 44 + tstypes 1029 ms
ok 45 + regex 972 ms
ok 46 + type_sanity 971 ms
ok 47 + opr_sanity 1004 ms
ok 48 + misc_sanity 992 ms
ok 49 + comments 1022 ms
ok 50 + expressions 904 ms
ok 51 + unicode 993 ms
ok 52 + xid 933 ms
ok 53 + mvcc 903 ms
# parallel group (5 tests): copydml copyselect copy insert_conflict insert
ok 54 + copy 723 ms
ok 55 + copyselect 504 ms
ok 56 + copydml 422 ms
ok 57 + insert 1429 ms
ok 58 + insert_conflict 768 ms
# parallel group (7 tests): create_operator create_function_c create_type create_misc create_schema create_procedure create_table
ok 59 + create_function_c 315 ms
ok 60 + create_misc 751 ms
ok 61 + create_operator 298 ms
ok 62 + create_procedure 747 ms
ok 63 + create_table 1721 ms
ok 64 + create_type 743 ms
ok 65 + create_schema 743 ms
# parallel group (5 tests): index_including create_index_spgist create_view index_including_gist create_index
ok 66 + create_index 3516 ms
ok 67 + create_index_spgist 1914 ms
ok 68 + create_view 1912 ms
ok 69 + index_including 1849 ms
ok 70 + index_including_gist 2003 ms
# parallel group (16 tests): typed_table errors roleattributes infinite_recurse create_cast select drop_if_exists create_aggregate hash_func create_function_sql create_am constraints updatable_views vacuum inherit triggers
ok 71 + create_aggregate 1029 ms
ok 72 + create_function_sql 1286 ms
ok 73 + create_cast 1019 ms
ok 74 + constraints 2725 ms
ok 75 + triggers 6410 ms
ok 76 + select 1020 ms
ok 77 + inherit 5072 ms
ok 78 + typed_table 811 ms
ok 79 + vacuum 3830 ms
ok 80 + drop_if_exists 1016 ms
ok 81 + updatable_views 3812 ms
ok 82 + roleattributes 898 ms
ok 83 + create_am 1718 ms
ok 84 + hash_func 1021 ms
ok 85 + errors 894 ms
ok 86 + infinite_recurse 994 ms
ok 87 - sanity_check 1399 ms
# parallel group (20 tests): select_distinct_on transactions select_implicit prepared_xacts delete select_having select_into subselect random portals namespace arrays update case union select_distinct hash_index join aggregates btree_index
ok 88 + select_into 2338 ms
ok 89 + select_distinct 2728 ms
ok 90 + select_distinct_on 1026 ms
ok 91 + select_implicit 1530 ms
ok 92 + select_having 1788 ms
ok 93 + subselect 2390 ms
ok 94 + union 2623 ms
ok 95 + case 2621 ms
ok 96 + join 3503 ms
ok 97 + aggregates 4949 ms
ok 98 + transactions 1448 ms
ok 99 + random 2378 ms
ok 100 + portals 2555 ms
ok 101 + arrays 2609 ms
ok 102 + btree_index 6361 ms
ok 103 + hash_index 3068 ms
ok 104 + update 2604 ms
ok 105 + delete 1504 ms
ok 106 + namespace 2600 ms
ok 107 + prepared_xacts 1500 ms
# parallel group (20 tests): password security_label drop_operator tablesample lock collate replica_identity groupingsets init_privs identity object_address generated spgist gin matview rowsecurity gist join_hash brin privileges
ok 108 + brin 11773 ms
ok 109 + gin 4184 ms
ok 110 + gist 6131 ms
ok 111 + spgist 3745 ms
ok 112 + privileges 13691 ms
ok 113 + init_privs 3054 ms
ok 114 + security_label 1026 ms
ok 115 + collate 3007 ms
ok 116 + matview 4655 ms
ok 117 + lock 2998 ms
ok 118 + replica_identity 3044 ms
ok 119 + rowsecurity 4691 ms
ok 120 + object_address 3047 ms
ok 121 + tablesample 1500 ms
ok 122 + groupingsets 3042 ms
ok 123 + drop_operator 1013 ms
ok 124 + password 1011 ms
ok 125 + identity 3037 ms
ok 126 + generated 3715 ms
ok 127 + join_hash 11631 ms
# parallel group (2 tests): brin_bloom brin_multi
ok 128 + brin_bloom 403 ms
ok 129 + brin_multi 2883 ms
# parallel group (18 tests): collate.icu.utf8 alter_operator alter_generic async misc misc_functions tidrangescan create_table_like without_overlaps create_role collate.utf8 tidscan tid sysviews dbsize tsrf incremental_sort merge
ok 130 + create_table_like 1473 ms
ok 131 + alter_generic 942 ms
ok 132 + alter_operator 659 ms
ok 133 + misc 1353 ms
ok 134 + async 936 ms
ok 135 + dbsize 1609 ms
ok 136 + merge 1775 ms
ok 137 + misc_functions 1394 ms
ok 138 + sysviews 1586 ms
ok 139 + tsrf 1600 ms
ok 140 + tid 1581 ms
ok 141 + tidscan 1545 ms
ok 142 + tidrangescan 1381 ms
ok 143 + collate.utf8 1484 ms
ok 144 + collate.icu.utf8 632 ms
ok 145 + incremental_sort 1750 ms
ok 146 + create_role 1478 ms
ok 147 + without_overlaps 1437 ms
# parallel group (7 tests): collate.linux.utf8 psql_crosstab collate.windows.win1252 amutils rules psql stats_ext
ok 148 + rules 2795 ms
ok 149 + psql 2899 ms
ok 150 + psql_crosstab 1027 ms
ok 151 + amutils 1390 ms
ok 152 + stats_ext 6343 ms
ok 153 + collate.linux.utf8 988 ms
ok 154 + collate.windows.win1252 1150 ms
ok 155 - select_parallel 7305 ms
ok 156 - write_parallel 886 ms
ok 157 - vacuum_parallel 468 ms
# parallel group (2 tests): subscription publication
ok 158 + publication 1323 ms
ok 159 + subscription 456 ms
# parallel group (17 tests): combocid advisory_lock xmlmap dependency portals_p2 tsdicts functional_deps guc equivclass bitmapops select_views tsearch window cluster indirect_toast foreign_data foreign_key
ok 160 + select_views 2097 ms
ok 161 + portals_p2 1459 ms
ok 162 + foreign_key 5059 ms
ok 163 + cluster 2330 ms
ok 164 + dependency 1453 ms
ok 165 + guc 1711 ms
ok 166 + bitmapops 2084 ms
ok 167 + combocid 568 ms
ok 168 + tsearch 2189 ms
ok 169 + tsdicts 1446 ms
ok 170 + foreign_data 3745 ms
ok 171 + window 2186 ms
ok 172 + xmlmap 1149 ms
ok 173 + functional_deps 1698 ms
ok 174 + advisory_lock 1146 ms
ok 175 + indirect_toast 3126 ms
ok 176 + equivclass 1855 ms
# parallel group (8 tests): jsonpath_encoding jsonpath json_encoding sqljson jsonb_jsonpath sqljson_queryfuncs json jsonb
ok 177 + json 1406 ms
ok 178 + jsonb 1695 ms
ok 179 + json_encoding 396 ms
ok 180 + jsonpath 394 ms
ok 181 + jsonpath_encoding 389 ms
ok 182 + jsonb_jsonpath 1162 ms
ok 183 + sqljson 551 ms
ok 184 + sqljson_queryfuncs 1314 ms
# parallel group (18 tests): limit plancache prepare returning rangefuncs sequence conversion polymorphism truncate rowtypes copy2 with largeobject domain xml temp plpgsql alter_table
ok 185 + plancache 1382 ms
ok 186 + limit 1259 ms
ok 187 + plpgsql 4043 ms
ok 188 + copy2 2696 ms
ok 189 + temp 2989 ms
ok 190 + domain 2814 ms
ok 191 + rangefuncs 2424 ms
ok 192 + prepare 1675 ms
ok 193 + conversion 2478 ms
ok 194 + truncate 2638 ms
ok 195 + alter_table 6322 ms
ok 196 + sequence 2417 ms
ok 197 + polymorphism 2473 ms
ok 198 + rowtypes 2680 ms
ok 199 + returning 1985 ms
ok 200 + largeobject 2799 ms
ok 201 + with 2789 ms
ok 202 + xml 2901 ms
# parallel group (13 tests): predicate compression partition_info reloptions hash_part explain memoize indexing partition_join stats partition_aggregate tuplesort partition_prune
not ok 203 + partition_join 3707 ms
ok 204 + partition_prune 8521 ms
ok 205 + reloptions 2890 ms
ok 206 + hash_part 2908 ms
ok 207 + indexing 3654 ms
ok 208 + partition_aggregate 5863 ms
ok 209 + partition_info 2146 ms
ok 210 + tuplesort 5966 ms
ok 211 + explain 2900 ms
ok 212 + compression 1826 ms
ok 213 + memoize 3068 ms
ok 214 + stats 4635 ms
ok 215 + predicate 1052 ms
# parallel group (2 tests): oidjoins event_trigger
ok 216 + oidjoins 899 ms
ok 217 + event_trigger 929 ms
ok 218 - event_trigger_login 313 ms
ok 219 - fast_default 747 ms
ok 220 - tablespace 1189 ms
1..220
# 1 of 220 tests failed.
# The differences that caused some tests to fail can be viewed in the file "C:/cirrus/build/testrun/recovery/027_stream_regress/data/regression.diffs".
# A copy of the test summary that you see above is saved in the file "C:/cirrus/build/testrun/recovery/027_stream_regress/data/regression.out".
=== dumping C:\cirrus\build/testrun/recovery/027_stream_regress\data/regression.diffs ===
diff -w -U3 C:/cirrus/src/test/regress/expected/partition_join.out C:/cirrus/build/testrun/recovery/027_stream_regress/data/results/partition_join.out
--- C:/cirrus/src/test/regress/expected/partition_join.out 2024-03-22 07:43:26.038038400 +0000
+++ C:/cirrus/build/testrun/recovery/027_stream_regress/data/results/partition_join.out 2024-03-22 07:49:45.987619700 +0000
@@ -511,24 +511,29 @@
 (SELECT * FROM prt1 t2 TABLESAMPLE SYSTEM (t1.a) REPEATABLE(t1.b)) s ON t1.a = s.a;
                          QUERY PLAN
--------------------------------------------------------------
- Append
+-------------------------------------------------------------------------
+ Gather
+   Workers Planned: 2
+   ->  Parallel Append
    ->  Nested Loop
-         ->  Seq Scan on prt1_p1 t1_1
+         ->  Parallel Seq Scan on prt1_p1 t1_1
+         ->  Materialize
          ->  Sample Scan on prt1_p1 t2_1
                Sampling: system (t1_1.a) REPEATABLE (t1_1.b)
                Filter: (t1_1.a = a)
    ->  Nested Loop
-         ->  Seq Scan on prt1_p2 t1_2
+         ->  Parallel Seq Scan on prt1_p2 t1_2
+         ->  Materialize
          ->  Sample Scan on prt1_p2 t2_2
                Sampling: system (t1_2.a) REPEATABLE (t1_2.b)
                Filter: (t1_2.a = a)
    ->  Nested Loop
-         ->  Seq Scan on prt1_p3 t1_3
+         ->  Parallel Seq Scan on prt1_p3 t1_3
+         ->  Materialize
          ->  Sample Scan on prt1_p3 t2_3
                Sampling: system (t1_3.a) REPEATABLE (t1_3.b)
                Filter: (t1_3.a = a)
-(16 rows)
+(21 rows)
 -- lateral reference in scan's restriction clauses
 EXPLAIN (COSTS OFF)
@@ -2042,34 +2047,41 @@
 (SELECT * FROM prt1_l t2 TABLESAMPLE SYSTEM (t1.a) REPEATABLE(t1.b)) s ON t1.a = s.a AND t1.b = s.b AND t1.c = s.c;
                          QUERY PLAN
-----------------------------------------------------------------------------------------
- Append
+----------------------------------------------------------------------------------------------------
+ Gather
+   Workers Planned: 2
+   ->  Parallel Append
    ->  Nested Loop
-         ->  Seq Scan on prt1_l_p1 t1_1
+         ->  Parallel Seq Scan on prt1_l_p1 t1_1
+         ->  Materialize
          ->  Sample Scan on prt1_l_p1 t2_1
                Sampling: system (t1_1.a) REPEATABLE (t1_1.b)
                Filter: ((t1_1.a = a) AND (t1_1.b = b) AND ((t1_1.c)::text = (c)::text))
    ->  Nested Loop
-         ->  Seq Scan on prt1_l_p2_p1 t1_2
-         ->  Sample Scan on prt1_l_p2_p1 t2_2
-               Sampling: system (t1_2.a) REPEATABLE (t1_2.b)
-               Filter: ((t1_2.a = a) AND (t1_2.b = b) AND ((t1_2.c)::text = (c)::text))
-   ->  Nested Loop
-         ->  Seq Scan on prt1_l_p2_p2 t1_3
+         ->  Parallel Seq Scan on prt1_l_p2_p2 t1_3
+         ->  Materialize
          ->  Sample Scan on prt1_l_p2_p2 t2_3
                Sampling: system (t1_3.a) REPEATABLE (t1_3.b)
                Filter: ((t1_3.a = a) AND (t1_3.b = b) AND ((t1_3.c)::text = (c)::text))
    ->  Nested Loop
-         ->  Seq Scan on prt1_l_p3_p1 t1_4
+         ->  Parallel Seq Scan on prt1_l_p2_p1 t1_2
+         ->  Materialize
+               ->  Sample Scan on prt1_l_p2_p1 t2_2
+                     Sampling: system (t1_2.a) REPEATABLE (t1_2.b)
+                     Filter: ((t1_2.a = a) AND (t1_2.b = b) AND ((t1_2.c)::text = (c)::text))
+   ->  Nested Loop
+         ->  Parallel Seq Scan on prt1_l_p3_p1 t1_4
+         ->  Materialize
          ->  Sample Scan on prt1_l_p3_p1 t2_4
                Sampling: system (t1_4.a) REPEATABLE (t1_4.b)
                Filter: ((t1_4.a = a) AND (t1_4.b = b) AND ((t1_4.c)::text = (c)::text))
    ->  Nested Loop
-         ->  Seq Scan on prt1_l_p3_p2 t1_5
+         ->  Parallel Seq Scan on prt1_l_p3_p2 t1_5
+         ->  Materialize
          ->  Sample Scan on prt1_l_p3_p2 t2_5
                Sampling: system (t1_5.a) REPEATABLE (t1_5.b)
                Filter: ((t1_5.a = a) AND (t1_5.b = b) AND ((t1_5.c)::text = (c)::text))
-(26 rows)
+(33 rows)
 -- partitionwise join with lateral reference in scan's restriction clauses
 EXPLAIN (COSTS OFF)
=== EOF ===
[07:49:57.646](132.897s) not ok 2 - regression tests pass
[07:49:57.661](0.016s) # Failed test 'regression tests pass'
# at C:/cirrus/src/test/recovery/t/027_stream_regress.pl line 95.
[07:49:57.662](0.000s) # got: '256'
# expected: '0'
1 1 1 2 1 1 9 1 5 5 3 4 3 4 4 1 32 1 1 1 6 104 2 1 5 1006 1 2 4001 41 1 5 17 -2 1 33 34 9 1 1 1 1 -1 1 1 -1 -32768 32767 1 46
Waiting for replication conn standby_1's replay_lsn to pass 0/143C7690 on primary
done
# Running: pg_dumpall -f C:\cirrus\build/testrun/recovery/027_stream_regress\data/primary.dump --no-sync -p 61266 --no-unlogged-table-data
[07:50:04.518](6.857s) ok 3 - dump primary server
# Running: pg_dumpall -f C:\cirrus\build/testrun/recovery/027_stream_regress\data/standby.dump --no-sync -p 61267
[07:50:10.626](6.107s) ok 4 - dump standby server
# Running: diff C:\cirrus\build/testrun/recovery/027_stream_regress\data/primary.dump C:\cirrus\build/testrun/recovery/027_stream_regress\data/standby.dump
[07:50:10.797](0.172s) ok 5 - compare primary and standby dumps
[07:50:11.534](0.737s) ok 6 - check contents of pg_stat_statements on regression database
### Stopping node "standby_1" using mode fast
# Running: pg_ctl -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata -m fast stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby_1"
### Stopping node "primary" using mode fast
# Running: pg_ctl -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata -m fast stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "primary"
[07:50:12.302](0.767s) 1..6
[07:50:12.397](0.095s) # Looks like you failed 1 test of 6.