# Checking port 58901
# Found port 58901
Name: primary
Data directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
Backup directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/backup
Archive directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/archives
Connection string: port=58901 host=C:/Windows/TEMP/EjCZlKNfDo
Log file: C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_primary.log
[05:09:31.108](0.059s) # initializing database system by copying initdb template
# Running: robocopy /E /NJS /NJH /NFL /NDL /NP C:/cirrus/build/tmp_install/initdb-template C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
# Running: C:\cirrus\build\src/test\regress\pg_regress.exe --config-auth C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
### Starting node "primary"
# Running: pg_ctl -w -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata -l C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_primary.log -o --cluster-name=primary start
waiting for server to start.... done
server started
# Postmaster PID for node "primary" is 1412
(standby_1,)
[05:09:32.551](1.443s) ok 1 - physical slot created on primary
# Taking pg_basebackup my_backup from node "primary"
# Running: pg_basebackup -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/backup/my_backup -h C:/Windows/TEMP/EjCZlKNfDo -p 58901 --checkpoint fast --no-sync
# Backup finished
# Checking port 58902
# Found port 58902
Name: standby_1
Data directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata
Backup directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/backup
Archive directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/archives
Connection string: port=58902 host=C:/Windows/TEMP/EjCZlKNfDo
Log file: C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_standby_1.log
# Initializing node "standby_1" from backup "my_backup" of node "primary"
### Enabling streaming replication for node "standby_1"
### Starting node "standby_1"
# Running: pg_ctl -w -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata -l C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_standby_1.log -o --cluster-name=standby_1 start
waiting for server to start.... done
server started
# Postmaster PID for node "standby_1" is 7068
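The "(standby_1,)" line and "ok 1 - physical slot created on primary" above come from the harness creating a physical replication slot on the primary before the standby is cloned and started; the rest of the setup (basebackup, streaming configuration) is driven by PostgreSQL::Test::Cluster. A minimal SQL sketch of the equivalent manual steps, assuming only the slot name "standby_1" taken from that output:

    -- On the primary: create the physical slot the standby will stream from.
    SELECT pg_create_physical_replication_slot('standby_1');

    -- After the standby has connected, streaming can be confirmed on the primary:
    SELECT application_name, state, sent_lsn, replay_lsn
      FROM pg_stat_replication;
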
# using postmaster on C:/Windows/TEMP/EjCZlKNfDo, port 58901
ok 1 - test_setup 770 ms
# parallel group (20 tests): boolean char name text oid float4 varchar int2 int8 regproc float8 pg_lsn txid bit uuid money int4 enum rangetypes numeric
ok 2 + boolean 474 ms
ok 3 + char 473 ms
ok 4 + name 471 ms
ok 5 + varchar 753 ms
ok 6 + text 739 ms
ok 7 + int2 750 ms
ok 8 + int4 1082 ms
ok 9 + int8 767 ms
ok 10 + oid 732 ms
ok 11 + float4 742 ms
ok 12 + float8 834 ms
ok 13 + bit 888 ms
ok 14 + numeric 2035 ms
ok 15 + txid 832 ms
ok 16 + uuid 919 ms
ok 17 + enum 1086 ms
ok 18 + money 916 ms
ok 19 + rangetypes 1827 ms
ok 20 + pg_lsn 822 ms
ok 21 + regproc 800 ms
# parallel group (20 tests): md5 date time timetz circle macaddr point inet numerology strings macaddr8 lseg line path timestamp polygon interval box multirangetypes timestamptz
ok 22 + strings 937 ms
ok 23 + md5 556 ms
ok 24 + numerology 924 ms
ok 25 + point 891 ms
ok 26 + lseg 932 ms
ok 27 + line 930 ms
ok 28 + box 1128 ms
ok 29 + path 946 ms
ok 30 + polygon 944 ms
ok 31 + circle 739 ms
ok 32 + date 571 ms
ok 33 + time 658 ms
ok 34 + timetz 656 ms
ok 35 + timestamp 937 ms
ok 36 + timestamptz 1392 ms
ok 37 + interval 1036 ms
ok 38 + inet 905 ms
ok 39 + macaddr 726 ms
ok 40 + macaddr8 911 ms
ok 41 + multirangetypes 1243 ms
# parallel group (12 tests): tstypes xid horology unicode geometry expressions regex comments misc_sanity type_sanity mvcc opr_sanity
ok 42 + geometry 921 ms
ok 43 + horology 881 ms
ok 44 + tstypes 485 ms
ok 45 + regex 1029 ms
ok 46 + type_sanity 1120 ms
ok 47 + opr_sanity 1391 ms
ok 48 + misc_sanity 1105 ms
ok 49 + comments 1099 ms
ok 50 + expressions 908 ms
ok 51 + unicode 906 ms
ok 52 + xid 725 ms
ok 53 + mvcc 1147 ms
# parallel group (5 tests): copyselect copydml copy insert_conflict insert
ok 54 + copy 681 ms
ok 55 + copyselect 512 ms
ok 56 + copydml 677 ms
ok 57 + insert 1743 ms
ok 58 + insert_conflict 870 ms
# parallel group (7 tests): create_function_c create_type create_misc create_procedure create_schema create_operator create_table
ok 59 + create_function_c 528 ms
ok 60 + create_misc 545 ms
ok 61 + create_operator 739 ms
ok 62 + create_procedure 573 ms
ok 63 + create_table 1256 ms
ok 64 + create_type 538 ms
ok 65 + create_schema 569 ms
# parallel group (5 tests): create_view index_including create_index_spgist index_including_gist create_index
ok 66 + create_index 2342 ms
ok 67 + create_index_spgist 1186 ms
ok 68 + create_view 823 ms
ok 69 + index_including 1175 ms
ok 70 + index_including_gist 1355 ms
# parallel group (16 tests): create_aggregate typed_table errors infinite_recurse select create_cast roleattributes create_function_sql drop_if_exists hash_func create_am constraints vacuum updatable_views inherit triggers
ok 71 + create_aggregate 446 ms
ok 72 + create_function_sql 938 ms
ok 73 + create_cast 927 ms
ok 74 + constraints 2654 ms
ok 75 + triggers 4833 ms
ok 76 + select 923 ms
ok 77 + inherit 4085 ms
ok 78 + typed_table 506 ms
ok 79 + vacuum 3125 ms
ok 80 + drop_if_exists 953 ms
ok 81 + updatable_views 3289 ms
ok 82 + roleattributes 922 ms
ok 83 + create_am 1520 ms
ok 84 + hash_func 990 ms
ok 85 + errors 827 ms
ok 86 + infinite_recurse 826 ms
ok 87 - sanity_check 836 ms
# parallel group (20 tests): select_implicit delete select_into select_distinct_on select_having random namespace subselect select_distinct case transactions portals prepared_xacts union arrays update hash_index join aggregates btree_index
ok 88 + select_into 823 ms
ok 89 + select_distinct 1686 ms
ok 90 + select_distinct_on 820 ms
ok 91 + select_implicit 760 ms
ok 92 + select_having 916 ms
ok 93 + subselect 1047 ms
ok 94 + union 2060 ms
ok 95 + case 1678 ms
ok 96 + join 4145 ms
ok 97 + aggregates 4849 ms
ok 98 + transactions 1874 ms
ok 99 + random 980 ms
ok 100 + portals 1985 ms
ok 101 + arrays 2380 ms
ok 102 + btree_index 5756 ms
ok 103 + hash_index 3284 ms
ok 104 + update 3269 ms
ok 105 + delete 739 ms
ok 106 + namespace 1023 ms
ok 107 + prepared_xacts 2034 ms
# parallel group (20 tests): drop_operator init_privs security_label lock object_address replica_identity collate password matview identity generated groupingsets tablesample spgist gin rowsecurity gist join_hash brin privileges
ok 108 + brin 11460 ms
ok 109 + gin 4309 ms
ok 110 + gist 5623 ms
ok 111 + spgist 4246 ms
ok 112 + privileges 13701 ms
ok 113 + init_privs 1831 ms
ok 114 + security_label 1830 ms
ok 115 + collate 2631 ms
ok 116 + matview 3424 ms
ok 117 + lock 1828 ms
ok 118 + replica_identity 2319 ms
ok 119 + rowsecurity 4710 ms
ok 120 + object_address 2316 ms
ok 121 + tablesample 3880 ms
ok 122 + groupingsets 3878 ms
ok 123 + drop_operator 1817 ms
ok 124 + password 2809 ms
ok 125 + identity 3869 ms
ok 126 + generated 3872 ms
ok 127 + join_hash 11399 ms
# parallel group (2 tests): brin_bloom brin_multi
ok 128 + brin_bloom 322 ms
ok 129 + brin_multi 1647 ms
# parallel group (18 tests): tsrf tidscan tid dbsize collate.icu.utf8 async collate.utf8 alter_operator sysviews tidrangescan misc create_role misc_functions alter_generic incremental_sort without_overlaps create_table_like merge
ok 130 + create_table_like 1396 ms
ok 131 + alter_generic 1014 ms
ok 132 + alter_operator 820 ms
ok 133 + misc 901 ms
ok 134 + async 675 ms
ok 135 + dbsize 465 ms
ok 136 + merge 1409 ms
ok 137 + misc_functions 994 ms
ok 138 + sysviews 811 ms
ok 139 + tsrf 430 ms
ok 140 + tid 454 ms
ok 141 + tidscan 453 ms
ok 142 + tidrangescan 887 ms
ok 143 + collate.utf8 661 ms
ok 144 + collate.icu.utf8 455 ms
ok 145 + incremental_sort 993 ms
ok 146 + create_role 977 ms
ok 147 + without_overlaps 1108 ms
# parallel group (7 tests): collate.linux.utf8 amutils psql_crosstab collate.windows.win1252 rules psql stats_ext
ok 148 + rules 1608 ms
ok 149 + psql 1669 ms
ok 150 + psql_crosstab 453 ms
ok 151 + amutils 451 ms
ok 152 + stats_ext 4798 ms
ok 153 + collate.linux.utf8 430 ms
ok 154 + collate.windows.win1252 552 ms
ok 155 - select_parallel 7253 ms
ok 156 - write_parallel 1177 ms
ok 157 - vacuum_parallel 451 ms
# parallel group (2 tests): subscription publication
ok 158 + publication 1553 ms
ok 159 + subscription 365 ms
# parallel group (17 tests): portals_p2 xmlmap tsdicts combocid dependency select_views equivclass cluster advisory_lock tsearch functional_deps guc window bitmapops foreign_data indirect_toast foreign_key
ok 160 + select_views 2065 ms
ok 161 + portals_p2 1168 ms
ok 162 + foreign_key 4157 ms
ok 163 + cluster 2060 ms
ok 164 + dependency 2058 ms
ok 165 + guc 2377 ms
ok 166 + bitmapops 2451 ms
ok 167 + combocid 1996 ms
ok 168 + tsearch 2052 ms
ok 169 + tsdicts 1794 ms
ok 170 + foreign_data 2582 ms
ok 171 + window 2409 ms
ok 172 + xmlmap 1790 ms
ok 173 + functional_deps 2113 ms
ok 174 + advisory_lock 2043 ms
ok 175 + indirect_toast 2704 ms
ok 176 + equivclass 2040 ms
# parallel group (8 tests): jsonpath_encoding json_encoding jsonpath sqljson jsonb_jsonpath sqljson_queryfuncs json jsonb
ok 177 + json 843 ms
ok 178 + jsonb 1012 ms
ok 179 + json_encoding 345 ms
ok 180 + jsonpath 344 ms
ok 181 + jsonpath_encoding 342 ms
ok 182 + jsonb_jsonpath 676 ms
ok 183 + sqljson 664 ms
ok 184 + sqljson_queryfuncs 777 ms
# parallel group (18 tests): plancache prepare xml returning conversion limit rowtypes temp largeobject truncate sequence rangefuncs copy2 polymorphism with domain plpgsql alter_table
ok 185 + plancache 524 ms
ok 186 + limit 1303 ms
ok 187 + plpgsql 3182 ms
ok 188 + copy2 1806 ms
ok 189 + temp 1438 ms
ok 190 + domain 2073 ms
ok 191 + rangefuncs 1631 ms
ok 192 + prepare 531 ms
ok 193 + conversion 876 ms
ok 194 + truncate 1567 ms
ok 195 + alter_table 5098 ms
ok 196 + sequence 1624 ms
ok 197 + polymorphism 1794 ms
ok 198 + rowtypes 1385 ms
ok 199 + returning 867 ms
ok 200 + largeobject 1435 ms
ok 201 + with 1994 ms
ok 202 + xml 774 ms
# parallel group (13 tests): predicate reloptions hash_part compression partition_info memoize explain partition_join indexing stats tuplesort partition_aggregate partition_prune
not ok 203 + partition_join 2972 ms
ok 204 + partition_prune 6281 ms
ok 205 + reloptions 1376 ms
ok 206 + hash_part 1407 ms
ok 207 + indexing 2972 ms
ok 208 + partition_aggregate 4959 ms
ok 209 + partition_info 1591 ms
ok 210 + tuplesort 4670 ms
ok 211 + explain 2034 ms
ok 212 + compression 1399 ms
ok 213 + memoize 1597 ms
ok 214 + stats 3575 ms
ok 215 + predicate 1271 ms
# parallel group (2 tests): oidjoins event_trigger
ok 216 + oidjoins 830 ms
ok 217 + event_trigger 828 ms
ok 218 - event_trigger_login 1210 ms
ok 219 - fast_default 997 ms
ok 220 - tablespace 1274 ms
1..220
# 1 of 220 tests failed.
# The differences that caused some tests to fail can be viewed in the file "C:/cirrus/build/testrun/recovery/027_stream_regress/data/regression.diffs".
# A copy of the test summary that you see above is saved in the file "C:/cirrus/build/testrun/recovery/027_stream_regress/data/regression.out".
=== dumping C:\cirrus\build/testrun/recovery/027_stream_regress\data/regression.diffs ===
diff -w -U3 C:/cirrus/src/test/regress/expected/partition_join.out C:/cirrus/build/testrun/recovery/027_stream_regress/data/results/partition_join.out
--- C:/cirrus/src/test/regress/expected/partition_join.out	2024-03-24 05:05:38.860927900 +0000
+++ C:/cirrus/build/testrun/recovery/027_stream_regress/data/results/partition_join.out	2024-03-24 05:11:06.790979800 +0000
@@ -511,24 +511,29 @@
   (SELECT * FROM prt1 t2 TABLESAMPLE SYSTEM (t1.a) REPEATABLE(t1.b)) s
   ON t1.a = s.a;
                           QUERY PLAN
--------------------------------------------------------------
- Append
+-------------------------------------------------------------------------
+ Gather
+   Workers Planned: 2
+   ->  Parallel Append
    ->  Nested Loop
-         ->  Seq Scan on prt1_p1 t1_1
+              ->  Parallel Seq Scan on prt1_p1 t1_1
+              ->  Materialize
          ->  Sample Scan on prt1_p1 t2_1
                Sampling: system (t1_1.a) REPEATABLE (t1_1.b)
                Filter: (t1_1.a = a)
    ->  Nested Loop
-         ->  Seq Scan on prt1_p2 t1_2
+              ->  Parallel Seq Scan on prt1_p2 t1_2
+              ->  Materialize
          ->  Sample Scan on prt1_p2 t2_2
                Sampling: system (t1_2.a) REPEATABLE (t1_2.b)
                Filter: (t1_2.a = a)
    ->  Nested Loop
-         ->  Seq Scan on prt1_p3 t1_3
+              ->  Parallel Seq Scan on prt1_p3 t1_3
+              ->  Materialize
          ->  Sample Scan on prt1_p3 t2_3
                Sampling: system (t1_3.a) REPEATABLE (t1_3.b)
                Filter: (t1_3.a = a)
-(16 rows)
+(21 rows)
 
 -- lateral reference in scan's restriction clauses
 EXPLAIN (COSTS OFF)
@@ -2042,34 +2047,41 @@
   (SELECT * FROM prt1_l t2 TABLESAMPLE SYSTEM (t1.a) REPEATABLE(t1.b)) s
   ON t1.a = s.a AND t1.b = s.b AND t1.c = s.c;
                                        QUERY PLAN
-----------------------------------------------------------------------------------------
- Append
+----------------------------------------------------------------------------------------------------
+ Gather
+   Workers Planned: 2
+   ->  Parallel Append
    ->  Nested Loop
-         ->  Seq Scan on prt1_l_p1 t1_1
+              ->  Parallel Seq Scan on prt1_l_p1 t1_1
+              ->  Materialize
          ->  Sample Scan on prt1_l_p1 t2_1
                Sampling: system (t1_1.a) REPEATABLE (t1_1.b)
                Filter: ((t1_1.a = a) AND (t1_1.b = b) AND ((t1_1.c)::text = (c)::text))
    ->  Nested Loop
-         ->  Seq Scan on prt1_l_p2_p1 t1_2
-         ->  Sample Scan on prt1_l_p2_p1 t2_2
-               Sampling: system (t1_2.a) REPEATABLE (t1_2.b)
-               Filter: ((t1_2.a = a) AND (t1_2.b = b) AND ((t1_2.c)::text = (c)::text))
-   ->  Nested Loop
-         ->  Seq Scan on prt1_l_p2_p2 t1_3
+              ->  Parallel Seq Scan on prt1_l_p2_p2 t1_3
+              ->  Materialize
          ->  Sample Scan on prt1_l_p2_p2 t2_3
                Sampling: system (t1_3.a) REPEATABLE (t1_3.b)
                Filter: ((t1_3.a = a) AND (t1_3.b = b) AND ((t1_3.c)::text = (c)::text))
    ->  Nested Loop
-         ->  Seq Scan on prt1_l_p3_p1 t1_4
+              ->  Parallel Seq Scan on prt1_l_p2_p1 t1_2
+              ->  Materialize
+                    ->  Sample Scan on prt1_l_p2_p1 t2_2
+                          Sampling: system (t1_2.a) REPEATABLE (t1_2.b)
+                          Filter: ((t1_2.a = a) AND (t1_2.b = b) AND ((t1_2.c)::text = (c)::text))
+        ->  Nested Loop
+              ->  Parallel Seq Scan on prt1_l_p3_p1 t1_4
+              ->  Materialize
          ->  Sample Scan on prt1_l_p3_p1 t2_4
                Sampling: system (t1_4.a) REPEATABLE (t1_4.b)
                Filter: ((t1_4.a = a) AND (t1_4.b = b) AND ((t1_4.c)::text = (c)::text))
    ->  Nested Loop
-         ->  Seq Scan on prt1_l_p3_p2 t1_5
+              ->  Parallel Seq Scan on prt1_l_p3_p2 t1_5
+              ->  Materialize
          ->  Sample Scan on prt1_l_p3_p2 t2_5
                Sampling: system (t1_5.a) REPEATABLE (t1_5.b)
                Filter: ((t1_5.a = a) AND (t1_5.b = b) AND ((t1_5.c)::text = (c)::text))
-(26 rows)
+(33 rows)
 
 -- partitionwise join with lateral reference in scan's restriction clauses
 EXPLAIN (COSTS OFF)
=== EOF ===
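The only failing test is partition_join, and both hunks show the same shape of difference: the expected output has a serial Append over per-partition nested loops, while the actual run chose a Gather over a Parallel Append with Materialize nodes under each join. A minimal SQL sketch of how the plan shape could be checked by hand against the regression database, assuming the query is the lateral TABLESAMPLE join whose opening lines are truncated in the diff (so the exact SELECT text is a reconstruction) and that the mismatch is purely the parallel-plan choice:

    -- Reconstructed query (hypothetical; only its tail is visible in the diff above).
    EXPLAIN (COSTS OFF)
    SELECT * FROM prt1 t1 JOIN LATERAL
      (SELECT * FROM prt1 t2 TABLESAMPLE SYSTEM (t1.a) REPEATABLE(t1.b)) s
      ON t1.a = s.a;

    -- If the difference is only the parallel choice, disabling parallel workers
    -- for the session should bring back the serial Append from the expected file.
    SET max_parallel_workers_per_gather = 0;
    EXPLAIN (COSTS OFF)
    SELECT * FROM prt1 t1 JOIN LATERAL
      (SELECT * FROM prt1 t2 TABLESAMPLE SYSTEM (t1.a) REPEATABLE(t1.b)) s
      ON t1.a = s.a;
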
[05:11:18.060](105.509s) not ok 2 - regression tests pass
[05:11:18.060](0.000s) # Failed test 'regression tests pass'
# at C:/cirrus/src/test/recovery/t/027_stream_regress.pl line 95.
[05:11:18.060](0.000s) # got: '256'
# expected: '0'
1 1 1 1 2 1 1 9 5 5 3 4 3 4 4 4001 1 32 1 1 1 6 104 2 1 5 1006 1 2 41 5 17 -2 33 34 1 9 1 1 1 1 1 1 -1 1 1 -1 -32768 32767 46
Waiting for replication conn standby_1's replay_lsn to pass 0/145EF3B0 on primary
done
# Running: pg_dumpall -f C:\cirrus\build/testrun/recovery/027_stream_regress\data/primary.dump --no-sync -p 58901 --no-unlogged-table-data
[05:11:22.899](4.840s) ok 3 - dump primary server
# Running: pg_dumpall -f C:\cirrus\build/testrun/recovery/027_stream_regress\data/standby.dump --no-sync -p 58902
[05:11:27.365](4.466s) ok 4 - dump standby server
# Running: diff C:\cirrus\build/testrun/recovery/027_stream_regress\data/primary.dump C:\cirrus\build/testrun/recovery/027_stream_regress\data/standby.dump
[05:11:27.530](0.165s) ok 5 - compare primary and standby dumps
[05:11:28.054](0.523s) ok 6 - check contents of pg_stat_statements on regression database
### Stopping node "standby_1" using mode fast
# Running: pg_ctl -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata -m fast stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby_1"
### Stopping node "primary" using mode fast
# Running: pg_ctl -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata -m fast stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "primary"
[05:11:28.522](0.469s) 1..6
[05:11:28.539](0.017s) # Looks like you failed 1 test of 6.