# Checking port 59369
# Found port 59369
Name: primary
Data directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
Backup directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/backup
Archive directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/archives
Connection string: port=59369 host=C:/Windows/TEMP/ayeRTgdGhg
Log file: C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_primary.log
[18:14:05.844](0.059s) # initializing database system by copying initdb template
# Running: robocopy /E /NJS /NJH /NFL /NDL /NP C:/cirrus/build/tmp_install/initdb-template C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
# Running: C:\cirrus\build\src/test\regress\pg_regress.exe --config-auth C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
### Starting node "primary"
# Running: pg_ctl -w -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata -l C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_primary.log -o --cluster-name=primary start
waiting for server to start.... done
server started
# Postmaster PID for node "primary" is 6108
(standby_1,)
[18:14:07.364](1.520s) ok 1 - physical slot created on primary
# Taking pg_basebackup my_backup from node "primary"
# Running: pg_basebackup -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/backup/my_backup -h C:/Windows/TEMP/ayeRTgdGhg -p 59369 --checkpoint fast --no-sync
# Backup finished
# Checking port 59370
# Found port 59370
Name: standby_1
Data directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata
Backup directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/backup
Archive directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/archives
Connection string: port=59370 host=C:/Windows/TEMP/ayeRTgdGhg
Log file: C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_standby_1.log
# Initializing node "standby_1" from backup "my_backup" of node "primary"
### Enabling streaming replication for node "standby_1"
### Starting node "standby_1"
# Running: pg_ctl -w -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata -l C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_standby_1.log -o --cluster-name=standby_1 start
waiting for server to start.... done
server started
# Postmaster PID for node "standby_1" is 7212
# using postmaster on C:/Windows/TEMP/ayeRTgdGhg, port 59369
ok 1 - test_setup 659 ms
# parallel group (20 tests): txid int4 boolean varchar pg_lsn oid uuid float8 money int2 char regproc float4 text bit enum int8 name rangetypes numeric
ok 2 + boolean 660 ms
ok 3 + char 720 ms
ok 4 + name 868 ms
ok 5 + varchar 655 ms
ok 6 + text 863 ms
ok 7 + int2 705 ms
ok 8 + int4 650 ms
ok 9 + int8 860 ms
ok 10 + oid 648 ms
ok 11 + float4 820 ms
ok 12 + float8 645 ms
ok 13 + bit 854 ms
ok 14 + numeric 1697 ms
ok 15 + txid 506 ms
ok 16 + uuid 639 ms
ok 17 + enum 848 ms
ok 18 + money 668 ms
ok 19 + rangetypes 1617 ms
ok 20 + pg_lsn 633 ms
ok 21 + regproc 693 ms
# parallel group (20 tests): md5 lseg line circle path point strings numerology timetz date time polygon inet interval macaddr macaddr8 timestamp multirangetypes box timestamptz
ok 22 + strings 1140 ms
ok 23 + md5 916 ms
ok 24 + numerology 1137 ms
ok 25 + point 1136 ms
ok 26 + lseg 912 ms
ok 27 + line 979 ms
ok 28 + box 1368 ms
ok 29 + path 1098 ms
ok 30 + polygon 1186 ms
ok 31 + circle 973 ms
ok 32 + date 1126 ms
ok 33 + time 1169 ms
ok 34 + timetz 1123 ms
ok 35 + timestamp 1225 ms
ok 36 + timestamptz 1695 ms
ok 37 + interval 1216 ms
ok 38 + inet 1196 ms
ok 39 + macaddr 1213 ms
ok 40 + macaddr8 1214 ms
ok 41 + multirangetypes 1216 ms
# parallel group (12 tests): xid misc_sanity comments unicode tstypes mvcc expressions horology type_sanity geometry regex opr_sanity
ok 42 + geometry 721 ms
ok 43 + horology 716 ms
ok 44 + tstypes 657 ms
ok 45 + regex 763 ms
ok 46 + type_sanity 714 ms
ok 47 + opr_sanity 842 ms
ok 48 + misc_sanity 358 ms
ok 49 + comments 356 ms
ok 50 + expressions 704 ms
ok 51 + unicode 354 ms
ok 52 + xid 352 ms
ok 53 + mvcc 699 ms
# parallel group (5 tests): copyselect copydml copy insert_conflict insert
ok 54 + copy 640 ms
ok 55 + copyselect 411 ms
ok 56 + copydml 536 ms
ok 57 + insert 1410 ms
ok 58 + insert_conflict 917 ms
# parallel group (7 tests): create_schema create_operator create_procedure create_type create_function_c create_misc create_table
ok 59 + create_function_c 343 ms
ok 60 + create_misc 427 ms
ok 61 + create_operator 256 ms
ok 62 + create_procedure 263 ms
ok 63 + create_table 1080 ms
ok 64 + create_type 292 ms
ok 65 + create_schema 250 ms
# parallel group (5 tests): index_including index_including_gist create_view create_index_spgist create_index
ok 66 + create_index 3200 ms
ok 67 + create_index_spgist 1557 ms
ok 68 + create_view 1527 ms
ok 69 + index_including 1464 ms
ok 70 + index_including_gist 1521 ms
# parallel group (16 tests): create_aggregate create_cast typed_table drop_if_exists select infinite_recurse errors hash_func roleattributes create_am create_function_sql constraints vacuum inherit updatable_views triggers
ok 71 + create_aggregate 839 ms
ok 72 + create_function_sql 1382 ms
ok 73 + create_cast 843 ms
ok 74 + constraints 2603 ms
ok 75 + triggers 4878 ms
ok 76 + select 841 ms
ok 77 + inherit 3292 ms
ok 78 + typed_table 837 ms
ok 79 + vacuum 3069 ms
ok 80 + drop_if_exists 834 ms
ok 81 + updatable_views 3352 ms
ok 82 + roleattributes 987 ms
ok 83 + create_am 1062 ms
ok 84 + hash_func 981 ms
ok 85 + errors 980 ms
ok 86 + infinite_recurse 978 ms
ok 87 - sanity_check 1041 ms
# parallel group (20 tests): case select_into random delete select_implicit transactions union select_distinct_on prepared_xacts select_having namespace portals select_distinct arrays subselect hash_index join update aggregates btree_index
ok 88 + select_into 1007 ms
ok 89 + select_distinct 3038 ms
ok 90 + select_distinct_on 2300 ms
ok 91 + select_implicit 1040 ms
ok 92 + select_having 2590 ms
ok 93 + subselect 3104 ms
ok 94 + union 1035 ms
ok 95 + case 985 ms
ok 96 + join 3942 ms
ok 97 + aggregates 5153 ms
ok 98 + transactions 1029 ms
ok 99 + random 991 ms
ok 100 + portals 3022 ms
ok 101 + arrays 3093 ms
ok 102 + btree_index 7270 ms
ok 103 + hash_index 3610 ms
ok 104 + update 4218 ms
ok 105 + delete 1019 ms
ok 106 + namespace 2861 ms
ok 107 + prepared_xacts 2275 ms
# parallel group (20 tests): security_label init_privs password collate lock drop_operator tablesample object_address matview replica_identity spgist groupingsets identity gin generated rowsecurity gist join_hash brin privileges
ok 108 + brin 11844 ms
ok 109 + gin 4058 ms
ok 110 + gist 5084 ms
ok 111 + spgist 4028 ms
ok 112 + privileges 13528 ms
ok 113 + init_privs 1129 ms
ok 114 + security_label 1116 ms
ok 115 + collate 1188 ms
ok 116 + matview 3812 ms
ok 117 + lock 1185 ms
ok 118 + replica_identity 3976 ms
ok 119 + rowsecurity 4940 ms
ok 120 + object_address 3128 ms
ok 121 + tablesample 2640 ms
ok 122 + groupingsets 4013 ms
ok 123 + drop_operator 1417 ms
ok 124 + password 1114 ms
ok 125 + identity 4014 ms
ok 126 + generated 4031 ms
ok 127 + join_hash 11711 ms
# parallel group (2 tests): brin_bloom brin_multi
ok 128 + brin_bloom 332 ms
ok 129 + brin_multi 1718 ms
# parallel group (18 tests): collate.icu.utf8 collate.utf8 dbsize tidscan async tidrangescan tid sysviews misc tsrf create_role misc_functions alter_operator incremental_sort alter_generic create_table_like without_overlaps merge
ok 130 + create_table_like 2391 ms
ok 131 + alter_generic 2278 ms
ok 132 + alter_operator 1963 ms
ok 133 + misc 1939 ms
ok 134 + async 1609 ms
ok 135 + dbsize 1132 ms
ok 136 + merge 2572 ms
ok 137 + misc_functions 1933 ms
ok 138 + sysviews 1821 ms
ok 139 + tsrf 1930 ms
ok 140 + tid 1802 ms
ok 141 + tidscan 1123 ms
ok 142 + tidrangescan 1756 ms
ok 143 + collate.utf8 1115 ms
ok 144 + collate.icu.utf8 1088 ms
ok 145 + incremental_sort 2109 ms
ok 146 + create_role 1919 ms
ok 147 + without_overlaps 2549 ms
# parallel group (7 tests): psql_crosstab amutils collate.linux.utf8 collate.windows.win1252 psql rules stats_ext
ok 148 + rules 1828 ms
ok 149 + psql 1592 ms
ok 150 + psql_crosstab 528 ms
ok 151 + amutils 734 ms
ok 152 + stats_ext 4714 ms
ok 153 + collate.linux.utf8 731 ms
ok 154 + collate.windows.win1252 1425 ms
not ok 155 - select_parallel 6870 ms
ok 156 - write_parallel 1049 ms
ok 157 - vacuum_parallel 410 ms
# parallel group (2 tests): subscription publication
ok 158 + publication 1167 ms
ok 159 + subscription 382 ms
# parallel group (17 tests): portals_p2 xmlmap equivclass select_views guc advisory_lock functional_deps tsdicts dependency combocid tsearch window cluster bitmapops foreign_data indirect_toast foreign_key
ok 160 + select_views 1403 ms
ok 161 + portals_p2 749 ms
ok 162 + foreign_key 4291 ms
ok 163 + cluster 2199 ms
ok 164 + dependency 1572 ms
ok 165 + guc 1489 ms
ok 166 + bitmapops 2208 ms
ok 167 + combocid 1714 ms
ok 168 + tsearch 1712 ms
ok 169 + tsdicts 1534 ms
ok 170 + foreign_data 3329 ms
ok 171 + window 2092 ms
ok 172 + xmlmap 1262 ms
ok 173 + functional_deps 1529 ms
ok 174 + advisory_lock 1527 ms
ok 175 + indirect_toast 3382 ms
ok 176 + equivclass 1257 ms
# parallel group (9 tests): jsonpath jsonpath_encoding json_encoding json sqljson_queryfuncs sqljson_jsontable jsonb_jsonpath sqljson jsonb
ok 177 + json 762 ms
ok 178 + jsonb 1310 ms
ok 179 + json_encoding 434 ms
ok 180 + jsonpath 390 ms
ok 181 + jsonpath_encoding 388 ms
ok 182 + jsonb_jsonpath 832 ms
ok 183 + sqljson 834 ms
ok 184 + sqljson_queryfuncs 782 ms
ok 185 + sqljson_jsontable 805 ms
# parallel group (18 tests): limit prepare conversion xml plancache returning sequence polymorphism with largeobject copy2 truncate temp rowtypes rangefuncs domain plpgsql alter_table
ok 186 + plancache 1275 ms
ok 187 + limit 1086 ms
ok 188 + plpgsql 4190 ms
ok 189 + copy2 2434 ms
ok 190 + temp 2470 ms
ok 191 + domain 2794 ms
ok 192 + rangefuncs 2610 ms
ok 193 + prepare 1079 ms
ok 194 + conversion 1153 ms
ok 195 + truncate 2425 ms
ok 196 + alter_table 6033 ms
ok 197 + sequence 1788 ms
ok 198 + polymorphism 1787 ms
ok 199 + rowtypes 2480 ms
ok 200 + returning 1257 ms
ok 201 + largeobject 2336 ms
ok 202 + with 1783 ms
ok 203 + xml 1251 ms
# parallel group (15 tests): reloptions hash_part predicate partition_info compression partition_merge partition_split indexing explain memoize partition_join partition_aggregate stats tuplesort partition_prune
ok 204 + partition_merge 2689 ms
ok 205 + partition_split 3104 ms
ok 206 + partition_join 3375 ms
ok 207 + partition_prune 6955 ms
ok 208 + reloptions 894 ms
ok 209 + hash_part 1114 ms
ok 210 + indexing 3274 ms
ok 211 + partition_aggregate 4117 ms
ok 212 + partition_info 2136 ms
ok 213 + tuplesort 5559 ms
ok 214 + explain 3301 ms
ok 215 + compression 2131 ms
ok 216 + memoize 3359 ms
ok 217 + stats 5181 ms
ok 218 + predicate 1592 ms
# parallel group (2 tests): oidjoins event_trigger
ok 219 + oidjoins 815 ms
ok 220 + event_trigger 949 ms
ok 221 - event_trigger_login 1032 ms
ok 222 - fast_default 742 ms
ok 223 - tablespace 1182 ms
1..223
# 1 of 223 tests failed.
# The differences that caused some tests to fail can be viewed in the file "C:/cirrus/build/testrun/recovery/027_stream_regress/data/regression.diffs".
# A copy of the test summary that you see above is saved in the file "C:/cirrus/build/testrun/recovery/027_stream_regress/data/regression.out".
=== dumping C:\cirrus\build/testrun/recovery/027_stream_regress\data/regression.diffs ===
diff -w -U3 C:/cirrus/src/test/regress/expected/select_parallel.out C:/cirrus/build/testrun/recovery/027_stream_regress/data/results/select_parallel.out
--- C:/cirrus/src/test/regress/expected/select_parallel.out	2024-05-03 18:10:15.150223900 +0000
+++ C:/cirrus/build/testrun/recovery/027_stream_regress/data/results/select_parallel.out	2024-05-03 18:15:32.785798300 +0000
@@ -452,25 +452,35 @@
          where tenk1.four = t.four
    );
                                                                                      QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
- Seq Scan on public.tenk1 t
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Hash Join
    Output: t.unique1, t.unique2, t.two, t.four, t.ten, t.twenty, t.hundred, t.thousand, t.twothousand, t.fivethous, t.tenthous, t.odd, t.even, t.stringu1, t.stringu2, t.string4
-   Filter: (SubPlan 1)
-   SubPlan 1
-     ->  Hash Join
-           Output: t.two
-           Hash Cond: (tenk1.stringu1 = t3.stringu1)
-           ->  Seq Scan on public.tenk1
-                 Output: tenk1.unique1, tenk1.unique2, tenk1.two, tenk1.four, tenk1.ten, tenk1.twenty, tenk1.hundred, tenk1.thousand, tenk1.twothousand, tenk1.fivethous, tenk1.tenthous, tenk1.odd, tenk1.even, tenk1.stringu1, tenk1.stringu2, tenk1.string4
-                 Filter: (tenk1.four = t.four)
+   Inner Unique: true
+   Hash Cond: (t.four = tenk1.four)
+   ->  Gather
+         Output: t.unique1, t.unique2, t.two, t.four, t.ten, t.twenty, t.hundred, t.thousand, t.twothousand, t.fivethous, t.tenthous, t.odd, t.even, t.stringu1, t.stringu2, t.string4
+         Workers Planned: 4
+         ->  Parallel Seq Scan on public.tenk1 t
+               Output: t.unique1, t.unique2, t.two, t.four, t.ten, t.twenty, t.hundred, t.thousand, t.twothousand, t.fivethous, t.tenthous, t.odd, t.even, t.stringu1, t.stringu2, t.string4
+               Filter: (t.two IS NOT NULL)
    ->  Hash
-         Output: t3.stringu1
+         Output: tenk1.four
+         ->  HashAggregate
+               Output: tenk1.four
+               Group Key: tenk1.four
          ->  Gather
-               Output: t3.stringu1
+               Output: tenk1.four
                Workers Planned: 4
+               ->  Parallel Hash Join
+                     Output: tenk1.four
+                     Hash Cond: (tenk1.stringu1 = t3.stringu1)
+                     ->  Parallel Seq Scan on public.tenk1
+                           Output: tenk1.unique1, tenk1.unique2, tenk1.two, tenk1.four, tenk1.ten, tenk1.twenty, tenk1.hundred, tenk1.thousand, tenk1.twothousand, tenk1.fivethous, tenk1.tenthous, tenk1.odd, tenk1.even, tenk1.stringu1, tenk1.stringu2, tenk1.string4
+                     ->  Parallel Hash
+                           Output: t3.stringu1
                ->  Parallel Seq Scan on public.tenk1 t3
                      Output: t3.stringu1
-(17 rows)
+(27 rows)
 
 -- this is not parallel-safe due to use of random() within SubLink's testexpr:
 explain (costs off)
=== EOF ===
[18:16:08.842](121.479s) not ok 2 - regression tests pass
[18:16:08.842](0.000s) # Failed test 'regression tests pass'
# at C:/cirrus/src/test/recovery/t/027_stream_regress.pl line 95.
[18:16:08.843](0.000s) # got: '256'
# expected: '0'
1 1 1 1 1 5 5 5 4001 3 4 3 4 4 1 32 1 1 1 6 104 2 1 1006 1 2 41 2 1 9 5 1 17 33 1 34 -2 1 1 1 1 1 -1 1 1 -1 -32768 32767 9 1 46
Waiting for replication conn standby_1's replay_lsn to pass 0/14611938 on primary
done
# Running: pg_dumpall -f C:\cirrus\build/testrun/recovery/027_stream_regress\data/primary.dump --no-sync -p 59369 --no-unlogged-table-data
[18:16:14.511](5.668s) ok 3 - dump primary server
# Running: pg_dumpall -f C:\cirrus\build/testrun/recovery/027_stream_regress\data/standby.dump --no-sync -p 59370
[18:16:19.177](4.666s) ok 4 - dump standby server
# Running: diff C:\cirrus\build/testrun/recovery/027_stream_regress\data/primary.dump C:\cirrus\build/testrun/recovery/027_stream_regress\data/standby.dump
[18:16:19.446](0.268s) ok 5 - compare primary and standby dumps
[18:16:20.115](0.669s) ok 6 - check contents of pg_stat_statements on regression database
### Stopping node "standby_1" using mode fast
# Running: pg_ctl -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata -m fast stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby_1"
### Stopping node "primary" using mode fast
# Running: pg_ctl -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata -m fast stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "primary"
[18:16:20.586](0.471s) 1..6
[18:16:20.598](0.012s) # Looks like you failed 1 test of 6.