# Checking port 55166
# Found port 55166
Name: primary
Data directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
Backup directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/backup
Archive directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/archives
Connection string: port=55166 host=C:/Windows/TEMP/D93TpAX7X3
Log file: C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_primary.log
[02:19:30.430](0.066s) # initializing database system by copying initdb template
# Running: robocopy /E /NJS /NJH /NFL /NDL /NP C:/cirrus/build/tmp_install/initdb-template C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
# Running: C:\cirrus\build\src/test\regress\pg_regress.exe --config-auth C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
### Starting node "primary"
# Running: pg_ctl -w -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata -l C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_primary.log -o --cluster-name=primary start
waiting for server to start.... done
server started
# Postmaster PID for node "primary" is 3804
(standby_1,)
[02:19:32.336](1.906s) ok 1 - physical slot created on primary
# Taking pg_basebackup my_backup from node "primary"
# Running: pg_basebackup -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/backup/my_backup -h C:/Windows/TEMP/D93TpAX7X3 -p 55166 --checkpoint fast --no-sync
# Backup finished
# Checking port 55167
# Found port 55167
Name: standby_1
Data directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata
Backup directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/backup
Archive directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/archives
Connection string: port=55167 host=C:/Windows/TEMP/D93TpAX7X3
Log file: C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_standby_1.log
# Initializing node "standby_1" from backup "my_backup" of node "primary"
### Enabling streaming replication for node "standby_1"
### Starting node "standby_1"
# Running: pg_ctl -w -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata -l C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_standby_1.log -o --cluster-name=standby_1 start
waiting for server to start.... done
server started
# Postmaster PID for node "standby_1" is 1732
# using postmaster on C:/Windows/TEMP/D93TpAX7X3, port 55166
ok 1 - test_setup 769 ms
# parallel group (20 tests): float8 int4 int8 name int2 txid float4 oid bit varchar char pg_lsn uuid boolean text regproc enum money rangetypes numeric
ok 2 + boolean 1034 ms
ok 3 + char 872 ms
ok 4 + name 636 ms
ok 5 + varchar 830 ms
ok 6 + text 1039 ms
ok 7 + int2 633 ms
ok 8 + int4 539 ms
ok 9 + int8 621 ms
ok 10 + oid 777 ms
ok 11 + float4 671 ms
ok 12 + float8 532 ms
ok 13 + bit 777 ms
ok 14 + numeric 1792 ms
ok 15 + txid 665 ms
ok 16 + uuid 968 ms
ok 17 + enum 1090 ms
ok 18 + money 1094 ms
ok 19 + rangetypes 1489 ms
ok 20 + pg_lsn 955 ms
ok 21 + regproc 1047 ms
# parallel group (20 tests): md5 numerology macaddr strings macaddr8 point lseg path line timestamp date circle timestamptz timetz box time polygon multirangetypes inet interval
ok 22 + strings 1122 ms
ok 23 + md5 338 ms
ok 24 + numerology 336 ms
ok 25 + point 1118 ms
ok 26 + lseg 1116 ms
ok 27 + line 1115 ms
ok 28 + box 1227 ms
ok 29 + path 1112 ms
ok 30 + polygon 1234 ms
ok 31 + circle 1126 ms
ok 32 + date 1125 ms
ok 33 + time 1229 ms
ok 34 + timetz 1217 ms
ok 35 + timestamp 1120 ms
ok 36 + timestamptz 1213 ms
ok 37 + interval 1381 ms
ok 38 + inet 1367 ms
ok 39 + macaddr 381 ms
ok 40 + macaddr8 1095 ms
ok 41 + multirangetypes 1346 ms
# parallel group (12 tests): tstypes horology misc_sanity unicode comments geometry regex type_sanity xid mvcc expressions opr_sanity
ok 42 + geometry 894 ms
ok 43 + horology 748 ms
ok 44 + tstypes 746 ms
ok 45 + regex 890 ms
ok 46 + type_sanity 888 ms
ok 47 + opr_sanity 949 ms
ok 48 + misc_sanity 789 ms
ok 49 + comments 883 ms
ok 50 + expressions 933 ms
ok 51 + unicode 855 ms
ok 52 + xid 884 ms
ok 53 + mvcc 892 ms
# parallel group (5 tests): copydml copyselect insert_conflict copy insert
ok 54 + copy 769 ms
ok 55 + copyselect 319 ms
ok 56 + copydml 288 ms
ok 57 + insert 1176 ms
ok 58 + insert_conflict 758 ms
# parallel group (7 tests): create_schema create_function_c create_misc create_operator create_procedure create_type create_table
ok 59 + create_function_c 350 ms
ok 60 + create_misc 437 ms
ok 61 + create_operator 496 ms
ok 62 + create_procedure 512 ms
ok 63 + create_table 1171 ms
ok 64 + create_type 510 ms
ok 65 + create_schema 341 ms
# parallel group (5 tests): index_including create_view index_including_gist create_index_spgist create_index
ok 66 + create_index 2592 ms
ok 67 + create_index_spgist 1381 ms
ok 68 + create_view 1350 ms
ok 69 + index_including 1312 ms
ok 70 + index_including_gist 1348 ms
# parallel group (16 tests): select roleattributes errors hash_func infinite_recurse create_aggregate create_cast typed_table drop_if_exists create_am create_function_sql constraints vacuum updatable_views inherit triggers
ok 71 + create_aggregate 698 ms
ok 72 + create_function_sql 1803 ms
ok 73 + create_cast 865 ms
ok 74 + constraints 2126 ms
ok 75 + triggers 4022 ms
ok 76 + select 539 ms
ok 77 + inherit 3019 ms
ok 78 + typed_table 859 ms
ok 79 + vacuum 2588 ms
ok 80 + drop_if_exists 1175 ms
ok 81 + updatable_views 2685 ms
ok 82 + roleattributes 531 ms
ok 83 + create_am 1778 ms
ok 84 + hash_func 528 ms
ok 85 + errors 527 ms
ok 86 + infinite_recurse 597 ms
ok 87 - sanity_check 1191 ms
# parallel group (20 tests): select_into namespace select_distinct_on select_having random delete subselect select_implicit portals case select_distinct prepared_xacts arrays transactions union hash_index update join aggregates btree_index
ok 88 + select_into 891 ms
ok 89 + select_distinct 1548 ms
ok 90 + select_distinct_on 1052 ms
ok 91 + select_implicit 1290 ms
ok 92 + select_having 1048 ms
ok 93 + subselect 1257 ms
ok 94 + union 2452 ms
ok 95 + case 1535 ms
ok 96 + join 3849 ms
ok 97 + aggregates 4419 ms
ok 98 + transactions 2444 ms
ok 99 + random 1038 ms
ok 100 + portals 1528 ms
ok 101 + arrays 2331 ms
ok 102 + btree_index 6039 ms
ok 103 + hash_index 3347 ms
ok 104 + update 3468 ms
ok 105 + delete 1149 ms
ok 106 + namespace 921 ms
ok 107 + prepared_xacts 1791 ms
# parallel group (20 tests): init_privs security_label tablesample lock password drop_operator matview replica_identity spgist object_address collate groupingsets identity gin generated rowsecurity gist join_hash brin privileges
ok 108 + brin 11573 ms
ok 109 + gin 4240 ms
ok 110 + gist 5138 ms
ok 111 + spgist 3509 ms
ok 112 + privileges 13886 ms
ok 113 + init_privs 850 ms
ok 114 + security_label 883 ms
ok 115 + collate 3709 ms
ok 116 + matview 3219 ms
ok 117 + lock 880 ms
ok 118 + replica_identity 3218 ms
ok 119 + rowsecurity 4671 ms
ok 120 + object_address 3496 ms
ok 121 + tablesample 874 ms
ok 122 + groupingsets 3749 ms
ok 123 + drop_operator 3209 ms
ok 124 + password 2997 ms
ok 125 + identity 3797 ms
ok 126 + generated 4658 ms
ok 127 + join_hash 11492 ms
# parallel group (2 tests): brin_bloom brin_multi
ok 128 + brin_bloom 435 ms
ok 129 + brin_multi 2599 ms
# parallel group (17 tests): tid collate.icu.utf8 sysviews create_role alter_operator tidrangescan tidscan dbsize async alter_generic tsrf misc_functions misc incremental_sort create_table_like without_overlaps merge
ok 130 + create_table_like 1382 ms
ok 131 + alter_generic 1271 ms
ok 132 + alter_operator 898 ms
ok 133 + misc 1369 ms
ok 134 + async 1188 ms
ok 135 + dbsize 940 ms
ok 136 + merge 1821 ms
ok 137 + misc_functions 1362 ms
ok 138 + sysviews 836 ms
ok 139 + tsrf 1257 ms
ok 140 + tid 718 ms
ok 141 + tidscan 931 ms
ok 142 + tidrangescan 881 ms
ok 143 + collate.icu.utf8 829 ms
ok 144 + incremental_sort 1351 ms
ok 145 + create_role 876 ms
ok 146 + without_overlaps 1424 ms
# parallel group (7 tests): collate.linux.utf8 amutils psql_crosstab collate.windows.win1252 rules psql stats_ext
ok 147 + rules 1402 ms
ok 148 + psql 1422 ms
ok 149 + psql_crosstab 860 ms
ok 150 + amutils 300 ms
ok 151 + stats_ext 5127 ms
ok 152 + collate.linux.utf8 293 ms
ok 153 + collate.windows.win1252 855 ms
not ok 154 - select_parallel 7551 ms
ok 155 - write_parallel 991 ms
ok 156 - vacuum_parallel 462 ms
# parallel group (2 tests): subscription publication
ok 157 + publication 1584 ms
ok 158 + subscription 384 ms
# parallel group (17 tests): portals_p2 advisory_lock xmlmap dependency equivclass combocid functional_deps select_views tsdicts guc tsearch window indirect_toast bitmapops cluster foreign_data foreign_key
ok 159 + select_views 1040 ms
ok 160 + portals_p2 776 ms
ok 161 + foreign_key 4739 ms
ok 162 + cluster 2464 ms
ok 163 + dependency 884 ms
ok 164 + guc 1430 ms
ok 165 + bitmapops 2314 ms
ok 166 + combocid 899 ms
ok 167 + tsearch 1807 ms
ok 168 + tsdicts 1296 ms
ok 169 + foreign_data 2639 ms
ok 170 + window 2057 ms
ok 171 + xmlmap 790 ms
ok 172 + functional_deps 890 ms
ok 173 + advisory_lock 757 ms
ok 174 + indirect_toast 2301 ms
ok 175 + equivclass 879 ms
# parallel group (7 tests): json_encoding jsonpath_encoding jsonpath sqljson json jsonb_jsonpath jsonb
ok 176 + json 1124 ms
ok 177 + jsonb 1508 ms
ok 178 + json_encoding 288 ms
ok 179 + jsonpath 337 ms
ok 180 + jsonpath_encoding 335 ms
ok 181 + jsonb_jsonpath 1117 ms
ok 182 + sqljson 1114 ms
# parallel group (18 tests): prepare limit returning copy2 plancache largeobject temp xml rowtypes conversion with sequence rangefuncs polymorphism domain truncate plpgsql alter_table
ok 183 + plancache 1368 ms
ok 184 + limit 847 ms
ok 185 + plpgsql 3515 ms
ok 186 + copy2 1364 ms
ok 187 + temp 1499 ms
ok 188 + domain 2017 ms
ok 189 + rangefuncs 1782 ms
ok 190 + prepare 837 ms
ok 191 + conversion 1509 ms
ok 192 + truncate 2033 ms
ok 193 + alter_table 5507 ms
ok 194 + sequence 1618 ms
ok 195 + polymorphism 1772 ms
ok 196 + rowtypes 1501 ms
ok 197 + returning 981 ms
ok 198 + largeobject 1413 ms
ok 199 + with 1499 ms
ok 200 + xml 1494 ms
# parallel group (13 tests): hash_part predicate partition_info compression memoize reloptions explain indexing partition_join stats tuplesort partition_aggregate partition_prune
ok 201 + partition_join 4001 ms
ok 202 + partition_prune 6604 ms
ok 203 + reloptions 1721 ms
ok 204 + hash_part 922 ms
ok 205 + indexing 2933 ms
ok 206 + partition_aggregate 5094 ms
ok 207 + partition_info 1276 ms
ok 208 + tuplesort 4852 ms
ok 209 + explain 1857 ms
ok 210 + compression 1272 ms
ok 211 + memoize 1486 ms
ok 212 + stats 4543 ms
ok 213 + predicate 1210 ms
# parallel group (2 tests): oidjoins event_trigger
ok 214 + oidjoins 849 ms
ok 215 + event_trigger 1033 ms
ok 216 - event_trigger_login 453 ms
ok 217 - fast_default 747 ms
ok 218 - tablespace 1103 ms
1..218
# 1 of 218 tests failed.
# The differences that caused some tests to fail can be viewed in the file "C:/cirrus/build/testrun/recovery/027_stream_regress/data/regression.diffs".
# A copy of the test summary that you see above is saved in the file "C:/cirrus/build/testrun/recovery/027_stream_regress/data/regression.out".
=== dumping C:\cirrus\build/testrun/recovery/027_stream_regress\data/regression.diffs ===
diff -w -U3 C:/cirrus/src/test/regress/expected/select_parallel.out C:/cirrus/build/testrun/recovery/027_stream_regress/data/results/select_parallel.out
--- C:/cirrus/src/test/regress/expected/select_parallel.out	2024-03-09 02:15:34.350244500 +0000
+++ C:/cirrus/build/testrun/recovery/027_stream_regress/data/results/select_parallel.out	2024-03-09 02:20:46.867872600 +0000
@@ -452,25 +452,35 @@
  where tenk1.four = t.four
  );
                                                                                                                               QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
- Seq Scan on public.tenk1 t
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Hash Join
   Output: t.unique1, t.unique2, t.two, t.four, t.ten, t.twenty, t.hundred, t.thousand, t.twothousand, t.fivethous, t.tenthous, t.odd, t.even, t.stringu1, t.stringu2, t.string4
-   Filter: (SubPlan 1)
-   SubPlan 1
-     ->  Hash Join
-           Output: t.two
-           Hash Cond: (tenk1.stringu1 = t3.stringu1)
-           ->  Seq Scan on public.tenk1
-                 Output: tenk1.unique1, tenk1.unique2, tenk1.two, tenk1.four, tenk1.ten, tenk1.twenty, tenk1.hundred, tenk1.thousand, tenk1.twothousand, tenk1.fivethous, tenk1.tenthous, tenk1.odd, tenk1.even, tenk1.stringu1, tenk1.stringu2, tenk1.string4
-                 Filter: (tenk1.four = t.four)
+   Inner Unique: true
+   Hash Cond: (t.four = tenk1.four)
+   ->  Gather
+         Output: t.unique1, t.unique2, t.two, t.four, t.ten, t.twenty, t.hundred, t.thousand, t.twothousand, t.fivethous, t.tenthous, t.odd, t.even, t.stringu1, t.stringu2, t.string4
+         Workers Planned: 4
+         ->  Parallel Seq Scan on public.tenk1 t
+               Output: t.unique1, t.unique2, t.two, t.four, t.ten, t.twenty, t.hundred, t.thousand, t.twothousand, t.fivethous, t.tenthous, t.odd, t.even, t.stringu1, t.stringu2, t.string4
+               Filter: (t.two IS NOT NULL)
   ->  Hash
-         Output: t3.stringu1
+         Output: tenk1.four
+         ->  HashAggregate
+               Output: tenk1.four
+               Group Key: tenk1.four
         ->  Gather
-               Output: t3.stringu1
+               Output: tenk1.four
               Workers Planned: 4
+               ->  Parallel Hash Join
+                     Output: tenk1.four
+                     Hash Cond: (tenk1.stringu1 = t3.stringu1)
+                     ->  Parallel Seq Scan on public.tenk1
+                           Output: tenk1.unique1, tenk1.unique2, tenk1.two, tenk1.four, tenk1.ten, tenk1.twenty, tenk1.hundred, tenk1.thousand, tenk1.twothousand, tenk1.fivethous, tenk1.tenthous, tenk1.odd, tenk1.even, tenk1.stringu1, tenk1.stringu2, tenk1.string4
+                     ->  Parallel Hash
+                           Output: t3.stringu1
               ->  Parallel Seq Scan on public.tenk1 t3
                      Output: t3.stringu1
-(17 rows)
+(27 rows)
 -- this is not parallel-safe due to use of random() within SubLink's testexpr:
 explain (costs off)
=== EOF ===
[02:21:19.816](107.481s) not ok 2 - regression tests pass
[02:21:19.816](0.000s) # Failed test 'regression tests pass'
# at C:/cirrus/src/test/recovery/t/027_stream_regress.pl line 95.
[02:21:19.817](0.001s) # got: '256'
# expected: '0'
-1 1 1 1 2 1 9 1 1 1 4001 2 5 5 1 17 3 4 3 4 4 1 32 41 1 1 1 6 1 104 2 1 -1 -32768 32767 5 1006 5 33 34 1 1 1 1 -2 9 1 46 1 1
Waiting for replication conn standby_1's replay_lsn to pass 0/141ED780 on primary
done
# Running: pg_dumpall -f C:\cirrus\build/testrun/recovery/027_stream_regress\data/primary.dump --no-sync -p 55166 --no-unlogged-table-data
[02:21:24.389](4.573s) ok 3 - dump primary server
# Running: pg_dumpall -f C:\cirrus\build/testrun/recovery/027_stream_regress\data/standby.dump --no-sync -p 55167
[02:21:28.386](3.997s) ok 4 - dump standby server
# Running: diff C:\cirrus\build/testrun/recovery/027_stream_regress\data/primary.dump C:\cirrus\build/testrun/recovery/027_stream_regress\data/standby.dump
[02:21:28.564](0.178s) ok 5 - compare primary and standby dumps
[02:21:29.186](0.622s) ok 6 - check contents of pg_stat_statements on regression database
### Stopping node "standby_1" using mode fast
# Running: pg_ctl -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata -m fast stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby_1"
### Stopping node "primary" using mode fast
# Running: pg_ctl -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata -m fast stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "primary"
[02:21:29.480](0.293s) 1..6
[02:21:29.486](0.007s) # Looks like you failed 1 test of 6.