# Checking port 55774
# Found port 55774
Name: primary
Data directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
Backup directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/backup
Archive directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/archives
Connection string: port=55774 host=C:/Windows/TEMP/DUUEO2Zy10
Log file: C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_primary.log
[14:50:13.810](0.050s) # initializing database system by copying initdb template
# Running: robocopy /E /NJS /NJH /NFL /NDL /NP C:/cirrus/build/tmp_install/initdb-template C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
# Running: C:\cirrus\build\src/test\regress\pg_regress.exe --config-auth C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
### Starting node "primary"
# Running: pg_ctl -w -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata -l C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_primary.log -o --cluster-name=primary start
waiting for server to start.... done
server started
# Postmaster PID for node "primary" is 5572
(standby_1,)
[14:50:15.635](1.825s) ok 1 - physical slot created on primary
# Taking pg_basebackup my_backup from node "primary"
# Running: pg_basebackup -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/backup/my_backup -h C:/Windows/TEMP/DUUEO2Zy10 -p 55774 --checkpoint fast --no-sync
# Backup finished
# Checking port 55775
# Found port 55775
Name: standby_1
Data directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata
Backup directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/backup
Archive directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/archives
Connection string: port=55775 host=C:/Windows/TEMP/DUUEO2Zy10
Log file: C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_standby_1.log
# Initializing node "standby_1" from backup "my_backup" of node "primary"
### Enabling streaming replication for node "standby_1"
### Starting node "standby_1"
# Running: pg_ctl -w -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata -l C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_standby_1.log -o --cluster-name=standby_1 start
waiting for server to start.... done
server started
# Postmaster PID for node "standby_1" is 3796
# using postmaster on C:/Windows/TEMP/DUUEO2Zy10, port 55774
ok 1 - test_setup 733 ms
# parallel group (20 tests): char int8 money uuid bit float4 regproc enum pg_lsn float8 name varchar txid oid text int4 int2 boolean rangetypes numeric
ok 2 + boolean 1369 ms
ok 3 + char 689 ms
ok 4 + name 1227 ms
ok 5 + varchar 1256 ms
ok 6 + text 1255 ms
ok 7 + int2 1307 ms
ok 8 + int4 1305 ms
ok 9 + int8 680 ms
ok 10 + oid 1249 ms
ok 11 + float4 977 ms
ok 12 + float8 1213 ms
ok 13 + bit 869 ms
ok 14 + numeric 1866 ms
ok 15 + txid 1241 ms
ok 16 + uuid 864 ms
ok 17 + enum 1053 ms
ok 18 + money 666 ms
ok 19 + rangetypes 1347 ms
ok 20 + pg_lsn 1169 ms
ok 21 + regproc 983 ms
# parallel group (20 tests): date numerology inet circle path time timetz line lseg polygon md5 box point macaddr8 macaddr interval strings timestamp timestamptz multirangetypes
ok 22 + strings 1419 ms
ok 23 + md5 954 ms
ok 24 + numerology 780 ms
ok 25 + point 1036 ms
ok 26 + lseg 905 ms
ok 27 + line 903 ms
ok 28 + box 1031 ms
ok 29 + path 807 ms
ok 30 + polygon 898 ms
ok 31 + circle 803 ms
ok 32 + date 723 ms
ok 33 + time 799 ms
ok 34 + timetz 798 ms
ok 35 + timestamp 1445 ms
ok 36 + timestamptz 1509 ms
ok 37 + interval 1328 ms
ok 38 + inet 755 ms
ok 39 + macaddr 1098 ms
ok 40 + macaddr8 1049 ms
ok 41 + multirangetypes 1772 ms
# parallel group (12 tests): tstypes comments mvcc horology regex xid geometry misc_sanity expressions unicode type_sanity opr_sanity
ok 42 + geometry 870 ms
ok 43 + horology 798 ms
ok 44 + tstypes 731 ms
ok 45 + regex 795 ms
ok 46 + type_sanity 1232 ms
ok 47 + opr_sanity 1385 ms
ok 48 + misc_sanity 859 ms
ok 49 + comments 754 ms
ok 50 + expressions 929 ms
ok 51 + unicode 971 ms
ok 52 + xid 830 ms
ok 53 + mvcc 747 ms
# parallel group (5 tests): copyselect copydml copy insert_conflict insert
ok 54 + copy 590 ms
ok 55 + copyselect 338 ms
ok 56 + copydml 563 ms
ok 57 + insert 1635 ms
ok 58 + insert_conflict 817 ms
# parallel group (7 tests): create_schema create_procedure create_type create_function_c create_operator create_misc create_table
ok 59 + create_function_c 511 ms
ok 60 + create_misc 709 ms
ok 61 + create_operator 681 ms
ok 62 + create_procedure 425 ms
ok 63 + create_table 1425 ms
ok 64 + create_type 422 ms
ok 65 + create_schema 420 ms
# parallel group (5 tests): create_view index_including index_including_gist create_index_spgist create_index
ok 66 + create_index 3493 ms
ok 67 + create_index_spgist 1413 ms
ok 68 + create_view 982 ms
ok 69 + index_including 980 ms
ok 70 + index_including_gist 1369 ms
# parallel group (16 tests): hash_func errors roleattributes infinite_recurse create_am select create_cast create_aggregate drop_if_exists create_function_sql typed_table constraints vacuum updatable_views inherit triggers
ok 71 + create_aggregate 2243 ms
ok 72 + create_function_sql 2319 ms
ok 73 + create_cast 2083 ms
ok 74 + constraints 2782 ms
ok 75 + triggers 5220 ms
ok 76 + select 1570 ms
ok 77 + inherit 4070 ms
ok 78 + typed_table 2390 ms
ok 79 + vacuum 3690 ms
ok 80 + drop_if_exists 2306 ms
ok 81 + updatable_views 3704 ms
ok 82 + roleattributes 875 ms
ok 83 + create_am 967 ms
ok 84 + hash_func 780 ms
ok 85 + errors 870 ms
ok 86 + infinite_recurse 962 ms
ok 87 - sanity_check 1267 ms
# parallel group (20 tests): random delete namespace select_having select_implicit select_distinct_on select_into case portals prepared_xacts union subselect select_distinct transactions arrays update hash_index join aggregates btree_index
ok 88 + select_into 1717 ms
ok 89 + select_distinct 2007 ms
ok 90 + select_distinct_on 1714 ms
ok 91 + select_implicit 1569 ms
ok 92 + select_having 1503 ms
ok 93 + subselect 1858 ms
ok 94 + union 1725 ms
ok 95 + case 1706 ms
ok 96 + join 3320 ms
ok 97 + aggregates 4258 ms
ok 98 + transactions 2037 ms
ok 99 + random 1353 ms
ok 100 + portals 1707 ms
ok 101 + arrays 2087 ms
ok 102 + btree_index 6176 ms
ok 103 + hash_index 3274 ms
ok 104 + update 2564 ms
ok 105 + delete 1345 ms
ok 106 + namespace 1343 ms
ok 107 + prepared_xacts 1700 ms
# parallel group (20 tests): collate lock init_privs drop_operator tablesample password security_label groupingsets replica_identity gin object_address matview spgist identity gist generated rowsecurity join_hash brin privileges
ok 108 + brin 12611 ms
ok 109 + gin 3792 ms
ok 110 + gist 5031 ms
ok 111 + spgist 4462 ms
ok 112 + privileges 14818 ms
ok 113 + init_privs 781 ms
ok 114 + security_label 3513 ms
ok 115 + collate 701 ms
ok 116 + matview 3803 ms
ok 117 + lock 775 ms
ok 118 + replica_identity 3703 ms
ok 119 + rowsecurity 5314 ms
ok 120 + object_address 3770 ms
ok 121 + tablesample 3177 ms
ok 122 + groupingsets 3501 ms
ok 123 + drop_operator 2609 ms
ok 124 + password 3498 ms
ok 125 + identity 4797 ms
ok 126 + generated 5090 ms
ok 127 + join_hash 12557 ms
# parallel group (2 tests): brin_bloom brin_multi
ok 128 + brin_bloom 589 ms
ok 129 + brin_multi 2127 ms
# parallel group (17 tests): async alter_operator dbsize tidrangescan collate.icu.utf8 tidscan sysviews create_role alter_generic misc_functions without_overlaps tid tsrf misc incremental_sort create_table_like merge
ok 130 + create_table_like 1505 ms
ok 131 + alter_generic 1193 ms
ok 132 + alter_operator 724 ms
ok 133 + misc 1313 ms
ok 134 + async 650 ms
ok 135 + dbsize 720 ms
ok 136 + merge 1506 ms
ok 137 + misc_functions 1269 ms
ok 138 + sysviews 821 ms
ok 139 + tsrf 1300 ms
ok 140 + tid 1298 ms
ok 141 + tidscan 817 ms
ok 142 + tidrangescan 711 ms
ok 143 + collate.icu.utf8 744 ms
ok 144 + incremental_sort 1484 ms
ok 145 + create_role 936 ms
ok 146 + without_overlaps 1289 ms
# parallel group (7 tests): collate.linux.utf8 psql_crosstab amutils collate.windows.win1252 rules psql stats_ext
ok 147 + rules 1534 ms
ok 148 + psql 1665 ms
ok 149 + psql_crosstab 468 ms
ok 150 + amutils 682 ms
ok 151 + stats_ext 4804 ms
ok 152 + collate.linux.utf8 420 ms
ok 153 + collate.windows.win1252 958 ms
not ok 154 - select_parallel 7448 ms
ok 155 - write_parallel 857 ms
ok 156 - vacuum_parallel 464 ms
# parallel group (2 tests): subscription publication
ok 157 + publication 1595 ms
ok 158 + subscription 685 ms
# parallel group (17 tests): combocid equivclass portals_p2 select_views guc xmlmap tsdicts dependency window advisory_lock functional_deps cluster bitmapops tsearch foreign_data indirect_toast foreign_key
ok 159 + select_views 1171 ms
ok 160 + portals_p2 983 ms
ok 161 + foreign_key 3900 ms
ok 162 + cluster 2016 ms
ok 163 + dependency 1521 ms
ok 164 + guc 1392 ms
ok 165 + bitmapops 2026 ms
ok 166 + combocid 818 ms
ok 167 + tsearch 2065 ms
ok 168 + tsdicts 1511 ms
ok 169 + foreign_data 2414 ms
ok 170 + window 1569 ms
ok 171 + xmlmap 1382 ms
ok 172 + functional_deps 1937 ms
ok 173 + advisory_lock 1700 ms
ok 174 + indirect_toast 2407 ms
ok 175 + equivclass 823 ms
# parallel group (7 tests): jsonpath_encoding json_encoding jsonpath json sqljson jsonb_jsonpath jsonb
ok 176 + json 717 ms
ok 177 + jsonb 1222 ms
ok 178 + json_encoding 598 ms
ok 179 + jsonpath 599 ms
ok 180 + jsonpath_encoding 563 ms
ok 181 + jsonb_jsonpath 772 ms
ok 182 + sqljson 747 ms
# parallel group (18 tests): limit returning prepare plancache copy2 conversion temp largeobject rowtypes xml sequence rangefuncs with polymorphism domain truncate plpgsql alter_table
ok 183 + plancache 1818 ms
ok 184 + limit 547 ms
ok 185 + plpgsql 4156 ms
ok 186 + copy2 1894 ms
ok 187 + temp 1974 ms
ok 188 + domain 3248 ms
ok 189 + rangefuncs 3243 ms
ok 190 + prepare 1807 ms
ok 191 + conversion 1887 ms
ok 192 + truncate 3243 ms
ok 193 + alter_table 6994 ms
ok 194 + sequence 3112 ms
ok 195 + polymorphism 3235 ms
ok 196 + rowtypes 2256 ms
ok 197 + returning 1452 ms
ok 198 + largeobject 2250 ms
ok 199 + with 3229 ms
ok 200 + xml 2430 ms
# parallel group (13 tests): reloptions predicate partition_info hash_part compression explain memoize indexing stats partition_join tuplesort partition_aggregate partition_prune
ok 201 + partition_join 3964 ms
ok 202 + partition_prune 6590 ms
ok 203 + reloptions 943 ms
ok 204 + hash_part 2211 ms
ok 205 + indexing 3015 ms
ok 206 + partition_aggregate 4794 ms
ok 207 + partition_info 2207 ms
ok 208 + tuplesort 4699 ms
ok 209 + explain 2810 ms
ok 210 + compression 2672 ms
ok 211 + memoize 2815 ms
ok 212 + stats 3658 ms
ok 213 + predicate 1293 ms
# parallel group (2 tests): oidjoins event_trigger
ok 214 + oidjoins 685 ms
ok 215 + event_trigger 850 ms
ok 216 - event_trigger_login 984 ms
ok 217 - fast_default 770 ms
ok 218 - tablespace 1356 ms
1..218
# 1 of 218 tests failed.
# The differences that caused some tests to fail can be viewed in the file "C:/cirrus/build/testrun/recovery/027_stream_regress/data/regression.diffs".
# A copy of the test summary that you see above is saved in the file "C:/cirrus/build/testrun/recovery/027_stream_regress/data/regression.out".
=== dumping C:\cirrus\build/testrun/recovery/027_stream_regress\data/regression.diffs ===
diff -w -U3 C:/cirrus/src/test/regress/expected/select_parallel.out C:/cirrus/build/testrun/recovery/027_stream_regress/data/results/select_parallel.out
--- C:/cirrus/src/test/regress/expected/select_parallel.out	2024-02-29 14:46:18.141824500 +0000
+++ C:/cirrus/build/testrun/recovery/027_stream_regress/data/results/select_parallel.out	2024-02-29 14:51:41.124339400 +0000
@@ -452,25 +452,35 @@
          where tenk1.four = t.four
  );
                                                                               QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
- Seq Scan on public.tenk1 t
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Hash Join
   Output: t.unique1, t.unique2, t.two, t.four, t.ten, t.twenty, t.hundred, t.thousand, t.twothousand, t.fivethous, t.tenthous, t.odd, t.even, t.stringu1, t.stringu2, t.string4
-   Filter: (SubPlan 1)
-   SubPlan 1
-     ->  Hash Join
-           Output: t.two
-           Hash Cond: (tenk1.stringu1 = t3.stringu1)
-           ->  Seq Scan on public.tenk1
-                 Output: tenk1.unique1, tenk1.unique2, tenk1.two, tenk1.four, tenk1.ten, tenk1.twenty, tenk1.hundred, tenk1.thousand, tenk1.twothousand, tenk1.fivethous, tenk1.tenthous, tenk1.odd, tenk1.even, tenk1.stringu1, tenk1.stringu2, tenk1.string4
-                 Filter: (tenk1.four = t.four)
+   Inner Unique: true
+   Hash Cond: (t.four = tenk1.four)
+   ->  Gather
+         Output: t.unique1, t.unique2, t.two, t.four, t.ten, t.twenty, t.hundred, t.thousand, t.twothousand, t.fivethous, t.tenthous, t.odd, t.even, t.stringu1, t.stringu2, t.string4
+         Workers Planned: 4
+         ->  Parallel Seq Scan on public.tenk1 t
+               Output: t.unique1, t.unique2, t.two, t.four, t.ten, t.twenty, t.hundred, t.thousand, t.twothousand, t.fivethous, t.tenthous, t.odd, t.even, t.stringu1, t.stringu2, t.string4
+               Filter: (t.two IS NOT NULL)
            ->  Hash
-                 Output: t3.stringu1
+         Output: tenk1.four
+         ->  HashAggregate
+               Output: tenk1.four
+               Group Key: tenk1.four
                  ->  Gather
-                       Output: t3.stringu1
+                     Output: tenk1.four
                        Workers Planned: 4
+                     ->  Parallel Hash Join
+                           Output: tenk1.four
+                           Hash Cond: (tenk1.stringu1 = t3.stringu1)
+                           ->  Parallel Seq Scan on public.tenk1
+                                 Output: tenk1.unique1, tenk1.unique2, tenk1.two, tenk1.four, tenk1.ten, tenk1.twenty, tenk1.hundred, tenk1.thousand, tenk1.twothousand, tenk1.fivethous, tenk1.tenthous, tenk1.odd, tenk1.even, tenk1.stringu1, tenk1.stringu2, tenk1.string4
+                           ->  Parallel Hash
+                                 Output: t3.stringu1
                        ->  Parallel Seq Scan on public.tenk1 t3
                              Output: t3.stringu1
-(17 rows)
+(27 rows)
 
 -- this is not parallel-safe due to use of random() within SubLink's testexpr:
 explain (costs off)
=== EOF ===
[14:52:18.302](122.667s) not ok 2 - regression tests pass
[14:52:18.302](0.000s) # Failed test 'regression tests pass'
# at C:/cirrus/src/test/recovery/t/027_stream_regress.pl line 95.
[14:52:18.302](0.000s) # got: '256'
# expected: '0'
1 1 1 1 2 1 1 9 5 5 4001 3 4 3 4 4 1 32 1 1 1 6 104 2 1 5 41 1006 1 2 1 5 17 -2 33 34 9 1 1 1 1 1 -1 1 1 -1 -32768 32767 46 1
Waiting for replication conn standby_1's replay_lsn to pass 0/143EA0D0 on primary
done
# Running: pg_dumpall -f C:\cirrus\build/testrun/recovery/027_stream_regress\data/primary.dump --no-sync -p 55774 --no-unlogged-table-data
[14:52:23.033](4.730s) ok 3 - dump primary server
# Running: pg_dumpall -f C:\cirrus\build/testrun/recovery/027_stream_regress\data/standby.dump --no-sync -p 55775
[14:52:27.837](4.805s) ok 4 - dump standby server
# Running: diff C:\cirrus\build/testrun/recovery/027_stream_regress\data/primary.dump C:\cirrus\build/testrun/recovery/027_stream_regress\data/standby.dump
[14:52:28.115](0.278s) ok 5 - compare primary and standby dumps
[14:52:28.806](0.691s) ok 6 - check contents of pg_stat_statements on regression database
### Stopping node "standby_1" using mode fast
# Running: pg_ctl -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata -m fast stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby_1"
### Stopping node "primary" using mode fast
# Running: pg_ctl -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata -m fast stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "primary"
[14:52:29.682](0.876s) 1..6
[14:52:29.711](0.028s) # Looks like you failed 1 test of 6.