# Checking port 54863
# Found port 54863
Name: primary
Data directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
Backup directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/backup
Archive directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/archives
Connection string: port=54863 host=C:/Windows/TEMP/lN1tJmWk1p
Log file: C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_primary.log
[17:09:36.774](0.057s) # initializing database system by copying initdb template
# Running: robocopy /E /NJS /NJH /NFL /NDL /NP C:/cirrus/build/tmp_install/initdb-template C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
# Running: C:\cirrus\build\src/test\regress\pg_regress.exe --config-auth C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata
### Starting node "primary"
# Running: pg_ctl -w -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata -l C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_primary.log -o --cluster-name=primary start
waiting for server to start.... done
server started
# Postmaster PID for node "primary" is 7020
(standby_1,)
[17:09:38.486](1.712s) ok 1 - physical slot created on primary
# Taking pg_basebackup my_backup from node "primary"
# Running: pg_basebackup -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/backup/my_backup -h C:/Windows/TEMP/lN1tJmWk1p -p 54863 --checkpoint fast --no-sync
# Backup finished
# Checking port 54864
# Found port 54864
Name: standby_1
Data directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata
Backup directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/backup
Archive directory: C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/archives
Connection string: port=54864 host=C:/Windows/TEMP/lN1tJmWk1p
Log file: C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_standby_1.log
# Initializing node "standby_1" from backup "my_backup" of node "primary"
### Enabling streaming replication for node "standby_1"
### Starting node "standby_1"
# Running: pg_ctl -w -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata -l C:\cirrus\build/testrun/recovery/027_stream_regress\log/027_stream_regress_standby_1.log -o --cluster-name=standby_1 start
waiting for server to start.... done
server started
# Postmaster PID for node "standby_1" is 1056
# using postmaster on C:/Windows/TEMP/lN1tJmWk1p, port 54863
ok 1 - test_setup 888 ms
# parallel group (20 tests): name oid char boolean varchar bit txid regproc float4 uuid int2 money pg_lsn text int4 int8 float8 enum rangetypes numeric
ok 2 + boolean 746 ms
ok 3 + char 745 ms
ok 4 + name 708 ms
ok 5 + varchar 743 ms
ok 6 + text 964 ms
ok 7 + int2 962 ms
ok 8 + int4 1095 ms
ok 9 + int8 1140 ms
ok 10 + oid 699 ms
ok 11 + float4 924 ms
ok 12 + float8 1136 ms
ok 13 + bit 731 ms
ok 14 + numeric 1944 ms
ok 15 + txid 826 ms
ok 16 + uuid 917 ms
ok 17 + enum 1147 ms
ok 18 + money 945 ms
ok 19 + rangetypes 1809 ms
ok 20 + pg_lsn 943 ms
ok 21 + regproc 909 ms
# parallel group (20 tests): point line path numerology lseg macaddr md5 circle macaddr8 strings inet box timetz time interval date timestamptz timestamp multirangetypes polygon
ok 22 + strings 1041 ms
ok 23 + md5 973 ms
ok 24 + numerology 778 ms
ok 25 + point 648 ms
ok 26 + lseg 959 ms
ok 27 + line 645 ms
ok 28 + box 1193 ms
ok 29 + path 641 ms
ok 30 + polygon 2133 ms
ok 31 + circle 959 ms
ok 32 + date 1296 ms
ok 33 + time 1272 ms
ok 34 + timetz 1215 ms
ok 35 + timestamp 1480 ms
ok 36 + timestamptz 1406 ms
ok 37 + interval 1266 ms
ok 38 + inet 1178 ms
ok 39 + macaddr 938 ms
ok 40 + macaddr8 1010 ms
ok 41 + multirangetypes 1503 ms
# parallel group (12 tests): tstypes xid mvcc comments unicode misc_sanity type_sanity horology regex expressions geometry opr_sanity
ok 42 + geometry 1060 ms
ok 43 + horology 992 ms
ok 44 + tstypes 445 ms
ok 45 + regex 1009 ms
ok 46 + type_sanity 665 ms
ok 47 + opr_sanity 1162 ms
ok 48 + misc_sanity 662 ms
ok 49 + comments 584 ms
ok 50 + expressions 1011 ms
ok 51 + unicode 654 ms
ok 52 + xid 570 ms
ok 53 + mvcc 576 ms
# parallel group (5 tests): copyselect copydml copy insert_conflict insert
ok 54 + copy 785 ms
ok 55 + copyselect 297 ms
ok 56 + copydml 607 ms
ok 57 + insert 1604 ms
ok 58 + insert_conflict 810 ms
# parallel group (7 tests): create_operator create_schema create_function_c create_misc create_type create_procedure create_table
ok 59 + create_function_c 328 ms
ok 60 + create_misc 336 ms
ok 61 + create_operator 272 ms
ok 62 + create_procedure 413 ms
ok 63 + create_table 1411 ms
ok 64 + create_type 336 ms
ok 65 + create_schema 312 ms
# parallel group (5 tests): index_including create_view index_including_gist create_index_spgist create_index
ok 66 + create_index 3033 ms
ok 67 + create_index_spgist 1544 ms
ok 68 + create_view 1077 ms
ok 69 + index_including 1068 ms
ok 70 + index_including_gist 1309 ms
# parallel group (16 tests): infinite_recurse hash_func create_cast select typed_table roleattributes create_function_sql create_am create_aggregate constraints errors drop_if_exists vacuum updatable_views inherit triggers
ok 71 + create_aggregate 1611 ms
ok 72 + create_function_sql 1455 ms
ok 73 + create_cast 885 ms
ok 74 + constraints 1735 ms
ok 75 + triggers 5648 ms
ok 76 + select 881 ms
ok 77 + inherit 3893 ms
ok 78 + typed_table 1285 ms
ok 79 + vacuum 3217 ms
ok 80 + drop_if_exists 1937 ms
ok 81 + updatable_views 3353 ms
ok 82 + roleattributes 1350 ms
ok 83 + create_am 1442 ms
ok 84 + hash_func 870 ms
ok 85 + errors 1720 ms
ok 86 + infinite_recurse 647 ms
ok 87 - sanity_check 786 ms
# parallel group (20 tests): select_distinct_on case select_implicit random delete select_having prepared_xacts select_into union namespace subselect transactions select_distinct portals arrays update join hash_index aggregates btree_index
ok 88 + select_into 1898 ms
ok 89 + select_distinct 2548 ms
ok 90 + select_distinct_on 588 ms
ok 91 + select_implicit 1116 ms
ok 92 + select_having 1794 ms
ok 93 + subselect 1968 ms
ok 94 + union 1887 ms
ok 95 + case 1110 ms
ok 96 + join 3781 ms
ok 97 + aggregates 5423 ms
ok 98 + transactions 1960 ms
ok 99 + random 1742 ms
ok 100 + portals 2530 ms
ok 101 + arrays 3324 ms
ok 102 + btree_index 7817 ms
ok 103 + hash_index 3771 ms
ok 104 + update 3769 ms
ok 105 + delete 1733 ms
ok 106 + namespace 1869 ms
ok 107 + prepared_xacts 1867 ms
# parallel group (20 tests): drop_operator lock init_privs password tablesample security_label spgist replica_identity collate groupingsets object_address identity matview generated gin rowsecurity gist join_hash brin privileges
ok 108 + brin 10631 ms
ok 109 + gin 4536 ms
ok 110 + gist 5237 ms
ok 111 + spgist 2619 ms
ok 112 + privileges 11979 ms
ok 113 + init_privs 1368 ms
ok 114 + security_label 2562 ms
ok 115 + collate 3851 ms
ok 116 + matview 4513 ms
ok 117 + lock 1315 ms
ok 118 + replica_identity 3432 ms
ok 119 + rowsecurity 5012 ms
ok 120 + object_address 3844 ms
ok 121 + tablesample 2523 ms
ok 122 + groupingsets 3841 ms
ok 123 + drop_operator 929 ms
ok 124 + password 1385 ms
ok 125 + identity 4334 ms
ok 126 + generated 4497 ms
ok 127 + join_hash 10601 ms
# parallel group (2 tests): brin_bloom brin_multi
ok 128 + brin_bloom 467 ms
ok 129 + brin_multi 1829 ms
# parallel group (17 tests): collate.icu.utf8 tid tidrangescan alter_operator misc_functions create_role misc async alter_generic sysviews tsrf merge without_overlaps dbsize tidscan incremental_sort create_table_like
ok 130 + create_table_like 1545 ms
ok 131 + alter_generic 1195 ms
ok 132 + alter_operator 927 ms
ok 133 + misc 1176 ms
ok 134 + async 1190 ms
ok 135 + dbsize 1244 ms
ok 136 + merge 1205 ms
ok 137 + misc_functions 1139 ms
ok 138 + sysviews 1192 ms
ok 139 + tsrf 1190 ms
ok 140 + tid 761 ms
ok 141 + tidscan 1269 ms
ok 142 + tidrangescan 819 ms
ok 143 + collate.icu.utf8 756 ms
ok 144 + incremental_sort 1335 ms
ok 145 + create_role 1126 ms
ok 146 + without_overlaps 1189 ms
# parallel group (7 tests): amutils collate.linux.utf8 psql_crosstab collate.windows.win1252 rules psql stats_ext
ok 147 + rules 1887 ms
ok 148 + psql 1934 ms
ok 149 + psql_crosstab 552 ms
ok 150 + amutils 388 ms
ok 151 + stats_ext 5104 ms
ok 152 + collate.linux.utf8 524 ms
ok 153 + collate.windows.win1252 663 ms
not ok 154 - select_parallel 7696 ms
ok 155 - write_parallel 862 ms
ok 156 - vacuum_parallel 500 ms
# parallel group (2 tests): subscription publication
ok 157 + publication 1409 ms
ok 158 + subscription 415 ms
# parallel group (17 tests): advisory_lock portals_p2 combocid equivclass tsdicts dependency select_views guc functional_deps xmlmap tsearch cluster indirect_toast bitmapops window foreign_data foreign_key
ok 159 + select_views 2008 ms
ok 160 + portals_p2 772 ms
ok 161 + foreign_key 5620 ms
ok 162 + cluster 2467 ms
ok 163 + dependency 2001 ms
ok 164 + guc 2000 ms
ok 165 + bitmapops 2818 ms
ok 166 + combocid 1165 ms
ok 167 + tsearch 2402 ms
ok 168 + tsdicts 1757 ms
ok 169 + foreign_data 2973 ms
ok 170 + window 2868 ms
ok 171 + xmlmap 2061 ms
ok 172 + functional_deps 1987 ms
ok 173 + advisory_lock 751 ms
ok 174 + indirect_toast 2704 ms
ok 175 + equivclass 1150 ms
# parallel group (7 tests): json_encoding jsonpath_encoding jsonpath sqljson json jsonb_jsonpath jsonb
ok 176 + json 665 ms
ok 177 + jsonb 1116 ms
ok 178 + json_encoding 290 ms
ok 179 + jsonpath 289 ms
ok 180 + jsonpath_encoding 287 ms
ok 181 + jsonb_jsonpath 705 ms
ok 182 + sqljson 308 ms
# parallel group (18 tests): limit prepare conversion rowtypes xml with truncate plancache returning largeobject copy2 polymorphism sequence rangefuncs domain temp plpgsql alter_table
ok 183 + plancache 2369 ms
ok 184 + limit 692 ms
ok 185 + plpgsql 5282 ms
ok 186 + copy2 2792 ms
ok 187 + temp 2993 ms
ok 188 + domain 2969 ms
ok 189 + rangefuncs 2967 ms
ok 190 + prepare 1575 ms
ok 191 + conversion 1668 ms
ok 192 + truncate 2056 ms
ok 193 + alter_table 7605 ms
ok 194 + sequence 2931 ms
ok 195 + polymorphism 2778 ms
ok 196 + rowtypes 2035 ms
ok 197 + returning 2428 ms
ok 198 + largeobject 2774 ms
ok 199 + with 2045 ms
ok 200 + xml 2043 ms
# parallel group (13 tests): predicate hash_part partition_info compression reloptions explain memoize indexing tuplesort partition_join stats partition_aggregate partition_prune
ok 201 + partition_join 4800 ms
ok 202 + partition_prune 7950 ms
ok 203 + reloptions 2255 ms
ok 204 + hash_part 1604 ms
ok 205 + indexing 3030 ms
ok 206 + partition_aggregate 6508 ms
ok 207 + partition_info 1599 ms
ok 208 + tuplesort 4788 ms
ok 209 + explain 2734 ms
ok 210 + compression 1834 ms
ok 211 + memoize 2770 ms
ok 212 + stats 5246 ms
ok 213 + predicate 1464 ms
# parallel group (2 tests): oidjoins event_trigger
ok 214 + oidjoins 820 ms
ok 215 + event_trigger 943 ms
ok 216 - event_trigger_login 372 ms
ok 217 - fast_default 566 ms
ok 218 - tablespace 2117 ms
1..218
# 1 of 218 tests failed.
# The differences that caused some tests to fail can be viewed in the file "C:/cirrus/build/testrun/recovery/027_stream_regress/data/regression.diffs".
# A copy of the test summary that you see above is saved in the file "C:/cirrus/build/testrun/recovery/027_stream_regress/data/regression.out".
=== dumping C:\cirrus\build/testrun/recovery/027_stream_regress\data/regression.diffs ===
diff -w -U3 C:/cirrus/src/test/regress/expected/select_parallel.out C:/cirrus/build/testrun/recovery/027_stream_regress/data/results/select_parallel.out
--- C:/cirrus/src/test/regress/expected/select_parallel.out	2024-03-04 17:05:24.942903000 +0000
+++ C:/cirrus/build/testrun/recovery/027_stream_regress/data/results/select_parallel.out	2024-03-04 17:11:04.339372100 +0000
@@ -452,25 +452,35 @@
          where tenk1.four = t.four );
                                                                                            QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
- Seq Scan on public.tenk1 t
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Hash Join
   Output: t.unique1, t.unique2, t.two, t.four, t.ten, t.twenty, t.hundred, t.thousand, t.twothousand, t.fivethous, t.tenthous, t.odd, t.even, t.stringu1, t.stringu2, t.string4
-   Filter: (SubPlan 1)
-   SubPlan 1
-     ->  Hash Join
-           Output: t.two
-           Hash Cond: (tenk1.stringu1 = t3.stringu1)
-           ->  Seq Scan on public.tenk1
-                 Output: tenk1.unique1, tenk1.unique2, tenk1.two, tenk1.four, tenk1.ten, tenk1.twenty, tenk1.hundred, tenk1.thousand, tenk1.twothousand, tenk1.fivethous, tenk1.tenthous, tenk1.odd, tenk1.even, tenk1.stringu1, tenk1.stringu2, tenk1.string4
-                 Filter: (tenk1.four = t.four)
+   Inner Unique: true
+   Hash Cond: (t.four = tenk1.four)
+   ->  Gather
+         Output: t.unique1, t.unique2, t.two, t.four, t.ten, t.twenty, t.hundred, t.thousand, t.twothousand, t.fivethous, t.tenthous, t.odd, t.even, t.stringu1, t.stringu2, t.string4
+         Workers Planned: 4
+         ->  Parallel Seq Scan on public.tenk1 t
+               Output: t.unique1, t.unique2, t.two, t.four, t.ten, t.twenty, t.hundred, t.thousand, t.twothousand, t.fivethous, t.tenthous, t.odd, t.even, t.stringu1, t.stringu2, t.string4
+               Filter: (t.two IS NOT NULL)
   ->  Hash
-         Output: t3.stringu1
+         Output: tenk1.four
+         ->  HashAggregate
+               Output: tenk1.four
+               Group Key: tenk1.four
         ->  Gather
-               Output: t3.stringu1
+               Output: tenk1.four
               Workers Planned: 4
+               ->  Parallel Hash Join
+                     Output: tenk1.four
+                     Hash Cond: (tenk1.stringu1 = t3.stringu1)
+                     ->  Parallel Seq Scan on public.tenk1
+                           Output: tenk1.unique1, tenk1.unique2, tenk1.two, tenk1.four, tenk1.ten, tenk1.twenty, tenk1.hundred, tenk1.thousand, tenk1.twothousand, tenk1.fivethous, tenk1.tenthous, tenk1.odd, tenk1.even, tenk1.stringu1, tenk1.stringu2, tenk1.string4
+                     ->  Parallel Hash
+                           Output: t3.stringu1
               ->  Parallel Seq Scan on public.tenk1 t3
                     Output: t3.stringu1
-(17 rows)
+(27 rows)
 
 -- this is not parallel-safe due to use of random() within SubLink's testexpr:
 explain (costs off)
=== EOF ===
[17:11:46.115](127.629s) not ok 2 - regression tests pass
[17:11:46.115](0.000s) #   Failed test 'regression tests pass'
#   at C:/cirrus/src/test/recovery/t/027_stream_regress.pl line 95.
[17:11:46.115](0.001s) #          got: '256'
#     expected: '0'
1 1 1 2 1 1 9 1 4001 5 5 41 3 4 3 4 4 1 32 1 1 1 6 104 2 1 5 1006 1 2 1 5 17 33 34 -2 9 46 1 1 1 1 -1 1 1 -1 -32768 32767 1 1
Waiting for replication conn standby_1's replay_lsn to pass 0/14303848 on primary
done
# Running: pg_dumpall -f C:\cirrus\build/testrun/recovery/027_stream_regress\data/primary.dump --no-sync -p 54863 --no-unlogged-table-data
[17:11:51.578](5.463s) ok 3 - dump primary server
# Running: pg_dumpall -f C:\cirrus\build/testrun/recovery/027_stream_regress\data/standby.dump --no-sync -p 54864
[17:11:56.402](4.824s) ok 4 - dump standby server
# Running: diff C:\cirrus\build/testrun/recovery/027_stream_regress\data/primary.dump C:\cirrus\build/testrun/recovery/027_stream_regress\data/standby.dump
[17:11:56.555](0.153s) ok 5 - compare primary and standby dumps
[17:11:57.607](1.052s) ok 6 - check contents of pg_stat_statements on regression database
### Stopping node "standby_1" using mode fast
# Running: pg_ctl -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_standby_1_data/pgdata -m fast stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby_1"
### Stopping node "primary" using mode fast
# Running: pg_ctl -D C:\cirrus\build/testrun/recovery/027_stream_regress\data/t_027_stream_regress_primary_data/pgdata -m fast stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "primary"
[17:11:58.201](0.594s) 1..6
[17:11:58.210](0.009s) # Looks like you failed 1 test of 6.