# Checking port 60079
# Found port 60079
Name: primary
Data directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
Backup directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/backup
Archive directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/archives
Connection string: port=60079 host=C:/Windows/TEMP/wv_bQhh5Rr
Log file: C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_primary.log
[19:29:21.330](0.309s) # initializing database system by copying initdb template
# Running: robocopy /E /NJS /NJH /NFL /NDL /NP C:/cirrus/build/tmp_install/initdb-template C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
# Running: C:\cirrus\build\src/test\regress\pg_regress.exe --config-auth C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
### Starting node "primary"
# Running: pg_ctl -w -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata -l C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_primary.log -o --cluster-name=primary start
waiting for server to start.... done
server started
# Postmaster PID for node "primary" is 6348
(standby_1,)
[19:29:22.897](1.567s) ok 1 - physical slot created on primary
# Taking pg_basebackup my_backup from node "primary"
# Running: pg_basebackup -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/backup/my_backup -h C:/Windows/TEMP/wv_bQhh5Rr -p 60079 --checkpoint fast --no-sync
# Backup finished
# Checking port 60080
# Found port 60080
Name: standby_1
Data directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata
Backup directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/backup
Archive directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/archives
Connection string: port=60080 host=C:/Windows/TEMP/wv_bQhh5Rr
Log file: C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_standby_1.log
# Initializing node "standby_1" from backup "my_backup" of node "primary"
### Enabling streaming replication for node "standby_1"
### Starting node "standby_1"
# Running: pg_ctl -w -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata -l C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_standby_1.log -o --cluster-name=standby_1 start
waiting for server to start.... done
server started
# Postmaster PID for node "standby_1" is 3848
# using postmaster on C:/Windows/TEMP/wv_bQhh5Rr, port 60079
ok 1 - test_setup 758 ms
# parallel group (20 tests): char name boolean text int8 oid txid float8 enum bit varchar int2 money pg_lsn float4 uuid regproc int4 rangetypes numeric
ok 2 + boolean 768 ms
ok 3 + char 647 ms
ok 4 + name 677 ms
ok 5 + varchar 936 ms
ok 6 + text 783 ms
ok 7 + int2 985 ms
ok 8 + int4 1053 ms
ok 9 + int8 779 ms
ok 10 + oid 778 ms
ok 11 + float4 1025 ms
ok 12 + float8 885 ms
ok 13 + bit 923 ms
ok 14 + numeric 1917 ms
ok 15 + txid 770 ms
ok 16 + uuid 1039 ms
ok 17 + enum 878 ms
ok 18 + money 968 ms
ok 19 + rangetypes 1492 ms
ok 20 + pg_lsn 1010 ms
ok 21 + regproc 1034 ms
# parallel group (20 tests): lseg md5 path circle point time timetz numerology macaddr8 line date timestamp inet macaddr interval timestamptz strings multirangetypes box polygon
ok 22 + strings 1858 ms
ok 23 + md5 608 ms
ok 24 + numerology 841 ms
ok 25 + point 714 ms
ok 26 + lseg 598 ms
ok 27 + line 839 ms
ok 28 + box 1846 ms
ok 29 + path 634 ms
ok 30 + polygon 1844 ms
ok 31 + circle 629 ms
ok 32 + date 828 ms
ok 33 + time 697 ms
ok 34 + timetz 802 ms
ok 35 + timestamp 1544 ms
ok 36 + timestamptz 1810 ms
ok 37 + interval 1730 ms
ok 38 + inet 1538 ms
ok 39 + macaddr 1536 ms
ok 40 + macaddr8 811 ms
ok 41 + multirangetypes 1817 ms
# parallel group (12 tests): geometry misc_sanity regex xid expressions unicode type_sanity tstypes comments opr_sanity mvcc horology
ok 42 + geometry 1320 ms
ok 43 + horology 1550 ms
ok 44 + tstypes 1395 ms
ok 45 + regex 1339 ms
ok 46 + type_sanity 1375 ms
ok 47 + opr_sanity 1412 ms
ok 48 + misc_sanity 1311 ms
ok 49 + comments 1409 ms
ok 50 + expressions 1332 ms
ok 51 + unicode 1367 ms
ok 52 + xid 1329 ms
ok 53 + mvcc 1443 ms
# parallel group (5 tests): copydml copyselect copy insert_conflict insert
ok 54 + copy 717 ms
ok 55 + copyselect 397 ms
ok 56 + copydml 321 ms
ok 57 + insert 1823 ms
ok 58 + insert_conflict 831 ms
# parallel group (7 tests): create_function_c create_operator create_type create_schema create_procedure create_misc create_table
ok 59 + create_function_c 300 ms
ok 60 + create_misc 429 ms
ok 61 + create_operator 351 ms
ok 62 + create_procedure 426 ms
ok 63 + create_table 1367 ms
ok 64 + create_type 367 ms
ok 65 + create_schema 397 ms
# parallel group (5 tests): create_view index_including index_including_gist create_index_spgist create_index
ok 66 + create_index 3288 ms
ok 67 + create_index_spgist 1980 ms
ok 68 + create_view 1065 ms
ok 69 + index_including 1107 ms
ok 70 + index_including_gist 1290 ms
# parallel group (16 tests): create_aggregate create_cast drop_if_exists infinite_recurse hash_func roleattributes errors create_function_sql select create_am typed_table constraints vacuum updatable_views inherit triggers
ok 71 + create_aggregate 865 ms
ok 72 + create_function_sql 1623 ms
ok 73 + create_cast 862 ms
ok 74 + constraints 2423 ms
ok 75 + triggers 5696 ms
ok 76 + select 1617 ms
ok 77 + inherit 4581 ms
ok 78 + typed_table 1652 ms
ok 79 + vacuum 3753 ms
ok 80 + drop_if_exists 887 ms
ok 81 + updatable_views 3920 ms
ok 82 + roleattributes 1604 ms
ok 83 + create_am 1644 ms
ok 84 + hash_func 1360 ms
ok 85 + errors 1602 ms
ok 86 + infinite_recurse 1072 ms
ok 87 - sanity_check 888 ms
# parallel group (20 tests): select_having random namespace select_distinct_on delete select_implicit case select_into prepared_xacts portals union transactions select_distinct subselect arrays hash_index update join aggregates btree_index
ok 88 + select_into 1842 ms
ok 89 + select_distinct 2604 ms
ok 90 + select_distinct_on 1170 ms
ok 91 + select_implicit 1437 ms
ok 92 + select_having 858 ms
ok 93 + subselect 3295 ms
ok 94 + union 2592 ms
ok 95 + case 1830 ms
ok 96 + join 4130 ms
ok 97 + aggregates 5392 ms
ok 98 + transactions 2589 ms
ok 99 + random 847 ms
ok 100 + portals 1861 ms
ok 101 + arrays 3841 ms
ok 102 + btree_index 7250 ms
ok 103 + hash_index 4028 ms
ok 104 + update 4073 ms
ok 105 + delete 1146 ms
ok 106 + namespace 836 ms
ok 107 + prepared_xacts 1812 ms
# parallel group (20 tests): lock init_privs tablesample drop_operator security_label spgist groupingsets password brin gin matview object_address replica_identity collate identity rowsecurity generated gist join_hash privileges
ok 108 + brin 3203 ms
ok 109 + gin 3202 ms
ok 110 + gist 5649 ms
ok 111 + spgist 3091 ms
ok 112 + privileges 14051 ms
ok 113 + init_privs 2368 ms
ok 114 + security_label 2785 ms
ok 115 + collate 4044 ms
ok 116 + matview 3192 ms
ok 117 + lock 2362 ms
ok 118 + replica_identity 3921 ms
ok 119 + rowsecurity 4279 ms
ok 120 + object_address 3191 ms
ok 121 + tablesample 2356 ms
ok 122 + groupingsets 3168 ms
ok 123 + drop_operator 2393 ms
ok 124 + password 3165 ms
ok 125 + identity 4199 ms
ok 126 + generated 4369 ms
ok 127 + join_hash 11776 ms
# parallel group (2 tests): brin_bloom brin_multi
ok 128 + brin_bloom 330 ms
ok 129 + brin_multi 1770 ms
# parallel group (17 tests): async alter_operator dbsize misc collate.icu.utf8 sysviews tid tidscan tsrf tidrangescan misc_functions without_overlaps create_role incremental_sort alter_generic merge create_table_like
ok 130 + create_table_like 1955 ms
ok 131 + alter_generic 1471 ms
ok 132 + alter_operator 957 ms
ok 133 + misc 1016 ms
ok 134 + async 952 ms
ok 135 + dbsize 1010 ms
ok 136 + merge 1713 ms
ok 137 + misc_functions 1259 ms
ok 138 + sysviews 1003 ms
ok 139 + tsrf 1137 ms
ok 140 + tid 999 ms
ok 141 + tidscan 1133 ms
ok 142 + tidrangescan 1133 ms
ok 143 + collate.icu.utf8 994 ms
ok 144 + incremental_sort 1306 ms
ok 145 + create_role 1304 ms
ok 146 + without_overlaps 1242 ms
# parallel group (7 tests): collate.linux.utf8 amutils psql_crosstab collate.windows.win1252 psql rules stats_ext
ok 147 + rules 1886 ms
ok 148 + psql 1843 ms
ok 149 + psql_crosstab 674 ms
ok 150 + amutils 517 ms
ok 151 + stats_ext 5874 ms
ok 152 + collate.linux.utf8 514 ms
ok 153 + collate.windows.win1252 1263 ms
not ok 154 - select_parallel 8636 ms
ok 155 - write_parallel 1019 ms
ok 156 - vacuum_parallel 415 ms
# parallel group (2 tests): subscription publication
ok 157 + publication 1428 ms
ok 158 + subscription 568 ms
# parallel group (17 tests): select_views tsearch equivclass advisory_lock xmlmap portals_p2 tsdicts functional_deps dependency combocid window indirect_toast guc bitmapops cluster foreign_data foreign_key
ok 159 + select_views 1004 ms
ok 160 + portals_p2 1653 ms
ok 161 + foreign_key 4247 ms
ok 162 + cluster 2025 ms
ok 163 + dependency 1839 ms
ok 164 + guc 1956 ms
ok 165 + bitmapops 1955 ms
ok 166 + combocid 1881 ms
ok 167 + tsearch 1169 ms
ok 168 + tsdicts 1832 ms
ok 169 + foreign_data 2522 ms
ok 170 + window 1888 ms
ok 171 + xmlmap 1637 ms
ok 172 + functional_deps 1826 ms
ok 173 + advisory_lock 1374 ms
ok 174 + indirect_toast 1883 ms
ok 175 + equivclass 1371 ms
# parallel group (7 tests): jsonpath_encoding json_encoding sqljson jsonpath json jsonb_jsonpath jsonb
ok 176 + json 732 ms
ok 177 + jsonb 1232 ms
ok 178 + json_encoding 532 ms
ok 179 + jsonpath 727 ms
ok 180 + jsonpath_encoding 329 ms
ok 181 + jsonb_jsonpath 724 ms
ok 182 + sqljson 648 ms
# parallel group (18 tests): returning prepare limit plancache conversion rowtypes largeobject with xml copy2 temp sequence polymorphism rangefuncs domain truncate plpgsql alter_table
ok 183 + plancache 1374 ms
ok 184 + limit 1262 ms
ok 185 + plpgsql 4200 ms
ok 186 + copy2 2046 ms
ok 187 + temp 2044 ms
ok 188 + domain 2877 ms
ok 189 + rangefuncs 2865 ms
ok 190 + prepare 1254 ms
ok 191 + conversion 1535 ms
ok 192 + truncate 2904 ms
ok 193 + alter_table 6181 ms
ok 194 + sequence 2036 ms
ok 195 + polymorphism 2833 ms
ok 196 + rowtypes 1747 ms
ok 197 + returning 1075 ms
ok 198 + largeobject 1845 ms
ok 199 + with 2017 ms
ok 200 + xml 2015 ms
# parallel group (13 tests): hash_part predicate reloptions compression partition_info memoize explain partition_join indexing stats tuplesort partition_aggregate partition_prune
ok 201 + partition_join 3120 ms
ok 202 + partition_prune 6896 ms
ok 203 + reloptions 893 ms
ok 204 + hash_part 842 ms
ok 205 + indexing 3932 ms
ok 206 + partition_aggregate 5323 ms
ok 207 + partition_info 1552 ms
ok 208 + tuplesort 4648 ms
ok 209 + explain 2295 ms
ok 210 + compression 1076 ms
ok 211 + memoize 2079 ms
ok 212 + stats 4495 ms
ok 213 + predicate 829 ms
# parallel group (2 tests): oidjoins event_trigger
ok 214 + oidjoins 1397 ms
ok 215 + event_trigger 1451 ms
ok 216 - event_trigger_login 345 ms
ok 217 - fast_default 530 ms
ok 218 - tablespace 1187 ms
1..218
# 1 of 218 tests failed.
# The differences that caused some tests to fail can be viewed in the file "C:/cirrus/build/testrun/recovery/027_stream_regress/data/regression.diffs".
# A copy of the test summary that you see above is saved in the file "C:/cirrus/build/testrun/recovery/027_stream_regress/data/regression.out".
=== dumping C:\cirrus\build/testrun/recovery/027_stream_regress\data/regression.diffs ===
diff -w -U3 C:/cirrus/src/test/regress/expected/select_parallel.out C:/cirrus/build/testrun/recovery/027_stream_regress/data/results/select_parallel.out
--- C:/cirrus/src/test/regress/expected/select_parallel.out	2024-02-23 19:25:17.578967300 +0000
+++ C:/cirrus/build/testrun/recovery/027_stream_regress/data/results/select_parallel.out	2024-02-23 19:30:45.295481500 +0000
@@ -452,25 +452,35 @@
    where tenk1.four = t.four );
                                                                                                                                   QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
- Seq Scan on public.tenk1 t
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Hash Join
   Output: t.unique1, t.unique2, t.two, t.four, t.ten, t.twenty, t.hundred, t.thousand, t.twothousand, t.fivethous, t.tenthous, t.odd, t.even, t.stringu1, t.stringu2, t.string4
-   Filter: (SubPlan 1)
-   SubPlan 1
-     ->  Hash Join
-           Output: t.two
-           Hash Cond: (tenk1.stringu1 = t3.stringu1)
-           ->  Seq Scan on public.tenk1
-                 Output: tenk1.unique1, tenk1.unique2, tenk1.two, tenk1.four, tenk1.ten, tenk1.twenty, tenk1.hundred, tenk1.thousand, tenk1.twothousand, tenk1.fivethous, tenk1.tenthous, tenk1.odd, tenk1.even, tenk1.stringu1, tenk1.stringu2, tenk1.string4
-                 Filter: (tenk1.four = t.four)
+   Inner Unique: true
+   Hash Cond: (t.four = tenk1.four)
+   ->  Gather
+         Output: t.unique1, t.unique2, t.two, t.four, t.ten, t.twenty, t.hundred, t.thousand, t.twothousand, t.fivethous, t.tenthous, t.odd, t.even, t.stringu1, t.stringu2, t.string4
+         Workers Planned: 4
+         ->  Parallel Seq Scan on public.tenk1 t
+               Output: t.unique1, t.unique2, t.two, t.four, t.ten, t.twenty, t.hundred, t.thousand, t.twothousand, t.fivethous, t.tenthous, t.odd, t.even, t.stringu1, t.stringu2, t.string4
+               Filter: (t.two IS NOT NULL)
    ->  Hash
-         Output: t3.stringu1
+         Output: tenk1.four
+         ->  HashAggregate
+               Output: tenk1.four
+               Group Key: tenk1.four
          ->  Gather
-               Output: t3.stringu1
+               Output: tenk1.four
                Workers Planned: 4
+               ->  Parallel Hash Join
+                     Output: tenk1.four
+                     Hash Cond: (tenk1.stringu1 = t3.stringu1)
+                     ->  Parallel Seq Scan on public.tenk1
+                           Output: tenk1.unique1, tenk1.unique2, tenk1.two, tenk1.four, tenk1.ten, tenk1.twenty, tenk1.hundred, tenk1.thousand, tenk1.twothousand, tenk1.fivethous, tenk1.tenthous, tenk1.odd, tenk1.even, tenk1.stringu1, tenk1.stringu2, tenk1.string4
+                     ->  Parallel Hash
+                           Output: t3.stringu1
                ->  Parallel Seq Scan on public.tenk1 t3
                      Output: t3.stringu1
-(17 rows)
+(27 rows)

 -- this is not parallel-safe due to use of random() within SubLink's testexpr:
 explain (costs off)
=== EOF ===
[19:31:21.734](118.837s) not ok 2 - regression tests pass
[19:31:21.734](0.000s) # Failed test 'regression tests pass'
# at C:/cirrus/src/test/recovery/t/027_stream_regress.pl line 95.
[19:31:21.734](0.000s) # got: '256'
# expected: '0'
1 1 1 1 1 2 1 4001 9 41 5 5 3 4 3 4 4 1 32 1 1 1 6 104 2 1 5 1006 1 2 5 17 -2 1 1 33 34 9 1 1 1 1 1 -1 1 1 -1 -32768 32767 46
Waiting for replication conn standby_1's replay_lsn to pass 0/14354628 on primary
done
# Running: pg_dumpall -f C:\cirrus\build/testrun/recovery/027_stream_regress\data/primary.dump --no-sync -p 60079 --no-unlogged-table-data
[19:31:27.856](6.121s) ok 3 - dump primary server
# Running: pg_dumpall -f C:\cirrus\build/testrun/recovery/027_stream_regress\data/standby.dump --no-sync -p 60080
[19:31:33.159](5.304s) ok 4 - dump standby server
# Running: diff C:\cirrus\build/testrun/recovery/027_stream_regress\data/primary.dump C:\cirrus\build/testrun/recovery/027_stream_regress\data/standby.dump
[19:31:33.311](0.152s) ok 5 - compare primary and standby dumps
[19:31:34.162](0.851s) ok 6 - check contents of pg_stat_statements on regression database
### Stopping node "standby_1" using mode fast
# Running: pg_ctl -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata -m fast stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby_1"
### Stopping node "primary" using mode fast
# Running: pg_ctl -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata -m fast stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "primary"
[19:31:34.602](0.440s) 1..6
[19:31:34.617](0.015s) # Looks like you failed 1 test of 6.