# Checking port 62020
# Found port 62020
Name: primary
Data directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
Backup directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/backup
Archive directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/archives
Connection string: port=62020 host=C:/Windows/TEMP/rAOW1gsSzJ
Log file: C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_primary.log
[08:39:43.662](0.119s) # initializing database system by copying initdb template
# Running: robocopy /E /NJS /NJH /NFL /NDL /NP C:/cirrus/build/tmp_install/initdb-template C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
# Running: C:\cirrus\build\src/test\regress\pg_regress.exe --config-auth C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
### Starting node "primary"
# Running: pg_ctl -w -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata -l C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_primary.log -o --cluster-name=primary start
waiting for server to start....
done
server started
# Postmaster PID for node "primary" is 6296
(standby_1,)
[08:39:45.890](2.228s) ok 1 - physical slot created on primary
# Taking pg_basebackup my_backup from node "primary"
# Running: pg_basebackup -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/backup/my_backup -h C:/Windows/TEMP/rAOW1gsSzJ -p 62020 --checkpoint fast --no-sync
# Backup finished
# Checking port 62021
# Found port 62021
Name: standby_1
Data directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata
Backup directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/backup
Archive directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/archives
Connection string: port=62021 host=C:/Windows/TEMP/rAOW1gsSzJ
Log file: C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_standby_1.log
# Initializing node "standby_1" from backup "my_backup" of node "primary"
### Enabling streaming replication for node "standby_1"
### Starting node "standby_1"
# Running: pg_ctl -w -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata -l C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_standby_1.log -o --cluster-name=standby_1 start
waiting for server to start....
done
server started
# Postmaster PID for node "standby_1" is 692
# using postmaster on C:/Windows/TEMP/rAOW1gsSzJ, port 62020
ok 1 - test_setup 715 ms
# parallel group (20 tests): char int2 oid text money uuid pg_lsn float4 name txid regproc float8 bit boolean int4 varchar int8 enum rangetypes numeric
ok 2 + boolean 1321 ms
ok 3 + char 445 ms
ok 4 + name 696 ms
ok 5 + varchar 1411 ms
ok 6 + text 539 ms
ok 7 + int2 505 ms
ok 8 + int4 1406 ms
ok 9 + int8 1464 ms
ok 10 + oid 532 ms
ok 11 + float4 626 ms
ok 12 + float8 1259 ms
ok 13 + bit 1304 ms
ok 14 + numeric 1769 ms
ok 15 + txid 790 ms
ok 16 + uuid 532 ms
ok 17 + enum 1494 ms
ok 18 + money 526 ms
ok 19 + rangetypes 1544 ms
ok 20 + pg_lsn 610 ms
ok 21 + regproc 780 ms
# parallel group (20 tests): point numerology md5 lseg time line timetz strings macaddr circle inet macaddr8 path timestamp interval date timestamptz box multirangetypes polygon
ok 22 + strings 872 ms
ok 23 + md5 675 ms
ok 24 + numerology 673 ms
ok 25 + point 626 ms
ok 26 + lseg 672 ms
ok 27 + line 755 ms
ok 28 + box 1117 ms
ok 29 + path 974 ms
ok 30 + polygon 1268 ms
ok 31 + circle 858 ms
ok 32 + date 1041 ms
ok 33 + time 745 ms
ok 34 + timetz 852 ms
ok 35 + timestamp 999 ms
ok 36 + timestamptz 1103 ms
ok 37 + interval 996 ms
ok 38 + inet 959 ms
ok 39 + macaddr 845 ms
ok 40 + macaddr8 956 ms
ok 41 + multirangetypes 1104 ms
# parallel group (12 tests): unicode xid tstypes mvcc misc_sanity geometry type_sanity expressions horology comments regex opr_sanity
ok 42 + geometry 1504 ms
ok 43 + horology 1534 ms
ok 44 + tstypes 1476 ms
ok 45 + regex 1542 ms
ok 46 + type_sanity 1497 ms
ok 47 + opr_sanity 1642 ms
ok 48 + misc_sanity 1494 ms
ok 49 + comments 1535 ms
ok 50 + expressions 1522 ms
ok 51 + unicode 663 ms
ok 52 + xid 1461 ms
ok 53 + mvcc 1460 ms
# parallel group (5 tests): copydml copyselect copy insert_conflict insert
ok 54 + copy 912 ms
ok 55 + copyselect 616 ms
ok 56 + copydml 526 ms
ok 57 + insert 2069 ms
ok 58 + insert_conflict 1056 ms
# parallel group (7 tests): create_function_c create_schema create_operator create_procedure create_misc create_type create_table
ok 59 + create_function_c 311 ms
ok 60 + create_misc 405 ms
ok 61 + create_operator 402 ms
ok 62 + create_procedure 400 ms
ok 63 + create_table 1697 ms
ok 64 + create_type 399 ms
ok 65 + create_schema 394 ms
# parallel group (5 tests): index_including create_view index_including_gist create_index_spgist create_index
ok 66 + create_index 2694 ms
ok 67 + create_index_spgist 1610 ms
ok 68 + create_view 1292 ms
ok 69 + index_including 1249 ms
ok 70 + index_including_gist 1374 ms
# parallel group (16 tests): create_aggregate select errors infinite_recurse create_cast hash_func typed_table create_function_sql create_am roleattributes drop_if_exists constraints vacuum updatable_views inherit triggers
ok 71 + create_aggregate 457 ms
ok 72 + create_function_sql 992 ms
ok 73 + create_cast 635 ms
ok 74 + constraints 1492 ms
ok 75 + triggers 4951 ms
ok 76 + select 567 ms
ok 77 + inherit 3577 ms
ok 78 + typed_table 927 ms
ok 79 + vacuum 2625 ms
ok 80 + drop_if_exists 1287 ms
ok 81 + updatable_views 3250 ms
ok 82 + roleattributes 978 ms
ok 83 + create_am 976 ms
ok 84 + hash_func 794 ms
ok 85 + errors 559 ms
ok 86 + infinite_recurse 612 ms
ok 87 - sanity_check 1000 ms
# parallel group (20 tests): select_into select_distinct_on select_having delete case random namespace select_implicit prepared_xacts portals subselect union arrays select_distinct transactions update hash_index join aggregates btree_index
ok 88 + select_into 1797 ms
ok 89 + select_distinct 2912 ms
ok 90 + select_distinct_on 1794 ms
ok 91 + select_implicit 1948 ms
ok 92 + select_having 1791 ms
ok 93 + subselect 2782 ms
ok 94 + union 2781 ms
ok 95 + case 1786 ms
ok 96 + join 4559 ms
ok 97 + aggregates 5249 ms
ok 98 + transactions 3034 ms
ok 99 + random 1934 ms
ok 100 + portals 2771 ms
ok 101 + arrays 2892 ms
ok 102 + btree_index 7233 ms
ok 103 + hash_index 4421 ms
ok 104 + update 3132 ms
ok 105 + delete 1769 ms
ok 106 + namespace 1923 ms
ok 107 + prepared_xacts 2551 ms
# parallel group (20 tests): lock password object_address collate security_label drop_operator identity init_privs tablesample replica_identity rowsecurity matview generated gin groupingsets spgist gist brin join_hash privileges
ok 108 + brin 9871 ms
ok 109 + gin 3211 ms
ok 110 + gist 4870 ms
ok 111 + spgist 4156 ms
ok 112 + privileges 11748 ms
ok 113 + init_privs 3111 ms
ok 114 + security_label 2289 ms
ok 115 + collate 2287 ms
ok 116 + matview 3129 ms
ok 117 + lock 1297 ms
ok 118 + replica_identity 3125 ms
ok 119 + rowsecurity 3123 ms
ok 120 + object_address 2126 ms
ok 121 + tablesample 3119 ms
ok 122 + groupingsets 3191 ms
ok 123 + drop_operator 2275 ms
ok 124 + password 1287 ms
ok 125 + identity 2634 ms
ok 126 + generated 3185 ms
ok 127 + join_hash 9851 ms
# parallel group (2 tests): brin_bloom brin_multi
ok 128 + brin_bloom 356 ms
ok 129 + brin_multi 2216 ms
# parallel group (17 tests): collate.icu.utf8 dbsize tid async tidscan tidrangescan sysviews alter_operator create_role alter_generic misc without_overlaps misc_functions create_table_like tsrf incremental_sort merge
ok 130 + create_table_like 1711 ms
ok 131 + alter_generic 1706 ms
ok 132 + alter_operator 1642 ms
ok 133 + misc 1703 ms
ok 134 + async 1007 ms
ok 135 + dbsize 715 ms
ok 136 + merge 2053 ms
ok 137 + misc_functions 1700 ms
ok 138 + sysviews 1580 ms
ok 139 + tsrf 1702 ms
ok 140 + tid 992 ms
ok 141 + tidscan 1216 ms
ok 142 + tidrangescan 1215 ms
ok 143 + collate.icu.utf8 700 ms
ok 144 + incremental_sort 2002 ms
ok 145 + create_role 1677 ms
ok 146 + without_overlaps 1681 ms
# parallel group (7 tests): collate.linux.utf8 amutils psql_crosstab collate.windows.win1252 rules psql stats_ext
ok 147 + rules 2333 ms
ok 148 + psql 2954 ms
ok 149 + psql_crosstab 1593 ms
ok 150 + amutils 1591 ms
ok 151 + stats_ext 5942 ms
ok 152 + collate.linux.utf8 1106 ms
ok 153 + collate.windows.win1252 1688 ms
not ok 154 - select_parallel 7487 ms
ok 155 - write_parallel 839 ms
ok 156 - vacuum_parallel 482 ms
# parallel group (2 tests): subscription publication
ok 157 + publication 1388 ms
ok 158 + subscription 422 ms
# parallel group (17 tests): xmlmap combocid equivclass advisory_lock select_views tsdicts functional_deps cluster portals_p2 tsearch guc dependency bitmapops indirect_toast window foreign_data foreign_key
ok 159 + select_views 1886 ms
ok 160 + portals_p2 2097 ms
ok 161 + foreign_key 4859 ms
ok 162 + cluster 2094 ms
ok 163 + dependency 2093 ms
ok 164 + guc 2089 ms
ok 165 + bitmapops 2089 ms
ok 166 + combocid 1142 ms
ok 167 + tsearch 2082 ms
ok 168 + tsdicts 1867 ms
ok 169 + foreign_data 2757 ms
ok 170 + window 2407 ms
ok 171 + xmlmap 1025 ms
ok 172 + functional_deps 2036 ms
ok 173 + advisory_lock 1580 ms
ok 174 + indirect_toast 2261 ms
ok 175 + equivclass 1125 ms
# parallel group (7 tests): json_encoding jsonb_jsonpath json sqljson jsonpath_encoding jsonpath jsonb
ok 176 + json 644 ms
ok 177 + jsonb 1110 ms
ok 178 + json_encoding 559 ms
ok 179 + jsonpath 869 ms
ok 180 + jsonpath_encoding 808 ms
ok 181 + jsonb_jsonpath 637 ms
ok 182 + sqljson 691 ms
# parallel group (18 tests): prepare limit conversion returning sequence plancache xml largeobject with truncate polymorphism rowtypes copy2 domain temp rangefuncs plpgsql alter_table
ok 183 + plancache 2275 ms
ok 184 + limit 1350 ms
ok 185 + plpgsql 4313 ms
ok 186 + copy2 2829 ms
ok 187 + temp 2898 ms
ok 188 + domain 2826 ms
ok 189 + rangefuncs 3150 ms
ok 190 + prepare 1113 ms
ok 191 + conversion 1341 ms
ok 192 + truncate 2818 ms
ok 193 + alter_table 5906 ms
ok 194 + sequence 2259 ms
ok 195 + polymorphism 2816 ms
ok 196 + rowtypes 2815 ms
ok 197 + returning 1880 ms
ok 198 + largeobject 2262 ms
ok 199 + with 2261 ms
ok 200 + xml 2251 ms
# parallel group (13 tests): reloptions predicate compression partition_info hash_part explain memoize indexing stats partition_join tuplesort partition_aggregate partition_prune
ok 201 + partition_join 3598 ms
ok 202 + partition_prune 6264 ms
ok 203 + reloptions 1160 ms
ok 204 + hash_part 1523 ms
ok 205 + indexing 2941 ms
ok 206 + partition_aggregate 4256 ms
ok 207 + partition_info 1156 ms
ok 208 + tuplesort 3992 ms
ok 209 + explain 1790 ms
ok 210 + compression 1151 ms
ok 211 + memoize 1855 ms
ok 212 + stats 3557 ms
ok 213 + predicate 1146 ms
# parallel group (2 tests): oidjoins event_trigger
ok 214 + oidjoins 717 ms
ok 215 + event_trigger 763 ms
ok 216 - event_trigger_login 315 ms
ok 217 - fast_default 561 ms
ok 218 - tablespace 1801 ms
1..218
# 1 of 218 tests failed.
# The differences that caused some tests to fail can be viewed in the file "C:/cirrus/build/testrun/recovery/027_stream_regress/data/regression.diffs".
# A copy of the test summary that you see above is saved in the file "C:/cirrus/build/testrun/recovery/027_stream_regress/data/regression.out".
=== dumping C:\cirrus\build/testrun/recovery/027_stream_regress\data/regression.diffs ===
diff -w -U3 C:/cirrus/src/test/regress/expected/select_parallel.out C:/cirrus/build/testrun/recovery/027_stream_regress/data/results/select_parallel.out
--- C:/cirrus/src/test/regress/expected/select_parallel.out	2024-03-19 08:35:45.518445500 +0000
+++ C:/cirrus/build/testrun/recovery/027_stream_regress/data/results/select_parallel.out	2024-03-19 08:41:13.493130900 +0000
@@ -452,25 +452,35 @@
 where tenk1.four = t.four );
 QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
- Seq Scan on public.tenk1 t
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Hash Join
   Output: t.unique1, t.unique2, t.two, t.four, t.ten, t.twenty, t.hundred, t.thousand, t.twothousand, t.fivethous, t.tenthous, t.odd, t.even, t.stringu1, t.stringu2, t.string4
-   Filter: (SubPlan 1)
-   SubPlan 1
-     ->  Hash Join
-           Output: t.two
-           Hash Cond: (tenk1.stringu1 = t3.stringu1)
-           ->  Seq Scan on public.tenk1
-                 Output: tenk1.unique1, tenk1.unique2, tenk1.two, tenk1.four, tenk1.ten, tenk1.twenty, tenk1.hundred, tenk1.thousand, tenk1.twothousand, tenk1.fivethous, tenk1.tenthous, tenk1.odd, tenk1.even, tenk1.stringu1, tenk1.stringu2, tenk1.string4
-                 Filter: (tenk1.four = t.four)
+   Inner Unique: true
+   Hash Cond: (t.four = tenk1.four)
+   ->  Gather
+         Output: t.unique1, t.unique2, t.two, t.four, t.ten, t.twenty, t.hundred, t.thousand, t.twothousand, t.fivethous, t.tenthous, t.odd, t.even, t.stringu1, t.stringu2, t.string4
+         Workers Planned: 4
+         ->  Parallel Seq Scan on public.tenk1 t
+               Output: t.unique1, t.unique2, t.two, t.four, t.ten, t.twenty, t.hundred, t.thousand, t.twothousand, t.fivethous, t.tenthous, t.odd, t.even, t.stringu1, t.stringu2, t.string4
+               Filter: (t.two IS NOT NULL)
   ->  Hash
-         Output: t3.stringu1
+         Output: tenk1.four
+         ->  HashAggregate
+               Output: tenk1.four
+               Group Key: tenk1.four
         ->  Gather
-              Output: t3.stringu1
+               Output: tenk1.four
                Workers Planned: 4
+               ->  Parallel Hash Join
+                     Output: tenk1.four
+                     Hash Cond: (tenk1.stringu1 = t3.stringu1)
+                     ->  Parallel Seq Scan on public.tenk1
+                           Output: tenk1.unique1, tenk1.unique2, tenk1.two, tenk1.four, tenk1.ten, tenk1.twenty, tenk1.hundred, tenk1.thousand, tenk1.twothousand, tenk1.fivethous, tenk1.tenthous, tenk1.odd, tenk1.even, tenk1.stringu1, tenk1.stringu2, tenk1.string4
+                     ->  Parallel Hash
+                           Output: t3.stringu1
               ->  Parallel Seq Scan on public.tenk1 t3
                     Output: t3.stringu1
-(17 rows)
+(27 rows)
 
 -- this is not parallel-safe due to use of random() within SubLink's testexpr:
 explain (costs off)
=== EOF ===
[08:41:48.063](122.173s) not ok 2 - regression tests pass
[08:41:48.063](0.000s) #   Failed test 'regression tests pass'
#   at C:/cirrus/src/test/recovery/t/027_stream_regress.pl line 95.
[08:41:48.063](0.000s) #          got: '256'
#     expected: '0'
1 1 1 2 1 9 1 1 5 5 5 1006 3 4 3 4 4 1 32 1 1 1 6 1 104 2 1 17 1 -1 1 5 1 1 2 -1 -32768 33 4001 41 32767 34 1 1 -2 1 9 1 46 1
Waiting for replication conn standby_1's replay_lsn to pass 0/145DB968 on primary
done
# Running: pg_dumpall -f C:\cirrus\build/testrun/recovery/027_stream_regress\data/primary.dump --no-sync -p 62020 --no-unlogged-table-data
[08:41:52.000](3.937s) ok 3 - dump primary server
# Running: pg_dumpall -f C:\cirrus\build/testrun/recovery/027_stream_regress\data/standby.dump --no-sync -p 62021
[08:41:56.540](4.540s) ok 4 - dump standby server
# Running: diff C:\cirrus\build/testrun/recovery/027_stream_regress\data/primary.dump C:\cirrus\build/testrun/recovery/027_stream_regress\data/standby.dump
[08:41:56.732](0.192s) ok 5 - compare primary and standby dumps
[08:41:57.353](0.621s) ok 6 - check contents of pg_stat_statements on regression database
### Stopping node "standby_1" using mode fast
# Running: pg_ctl -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata -m fast stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby_1"
### Stopping node "primary" using mode fast
# Running: pg_ctl -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata -m fast stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "primary"
[08:41:57.712](0.359s) 1..6
[08:41:57.722](0.009s) # Looks like you failed 1 test of 6.