# Checking port 60830
# Found port 60830
Name: primary
Data directory: /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/t_027_stream_regress_primary_data/pgdata
Backup directory: /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/t_027_stream_regress_primary_data/backup
Archive directory: /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/t_027_stream_regress_primary_data/archives
Connection string: port=60830 host=/tmp/hTJO2uM9I1
Log file: /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/log/027_stream_regress_primary.log
[02:09:11.160](0.017s) # initializing database system by copying initdb template
# Running: cp -RPp /tmp/cirrus-ci-build/build/tmp_install/initdb-template /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/t_027_stream_regress_primary_data/pgdata
# Running: /tmp/cirrus-ci-build/build/src/test/regress/pg_regress --config-auth /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/t_027_stream_regress_primary_data/pgdata
### Starting node "primary"
# Running: pg_ctl -w -D /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/t_027_stream_regress_primary_data/pgdata -l /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/log/027_stream_regress_primary.log -o --cluster-name=primary start
waiting for server to start.... done
server started
# Postmaster PID for node "primary" is 11377
(standby_1,)
[02:09:11.453](0.293s) ok 1 - physical slot created on primary
# Taking pg_basebackup my_backup from node "primary"
# Running: pg_basebackup -D /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/t_027_stream_regress_primary_data/backup/my_backup -h /tmp/hTJO2uM9I1 -p 60830 --checkpoint fast --no-sync
# Backup finished
# Checking port 60831
# Found port 60831
Name: standby_1
Data directory: /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/t_027_stream_regress_standby_1_data/pgdata
Backup directory: /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/t_027_stream_regress_standby_1_data/backup
Archive directory: /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/t_027_stream_regress_standby_1_data/archives
Connection string: port=60831 host=/tmp/hTJO2uM9I1
Log file: /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/log/027_stream_regress_standby_1.log
# Initializing node "standby_1" from backup "my_backup" of node "primary"
### Enabling streaming replication for node "standby_1"
### Starting node "standby_1"
# Running: pg_ctl -w -D /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/t_027_stream_regress_standby_1_data/pgdata -l /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/log/027_stream_regress_standby_1.log -o --cluster-name=standby_1 start
waiting for server to start.... done
server started
# Postmaster PID for node "standby_1" is 11453
# using postmaster on /tmp/hTJO2uM9I1, port 60830
ok 1 - test_setup 421 ms
# parallel group (20 tests): varchar pg_lsn name char txid int2 oid uuid int4 text int8 boolean money regproc float4 bit float8 enum rangetypes numeric
ok 2 + boolean 306 ms
ok 3 + char 190 ms
ok 4 + name 180 ms
ok 5 + varchar 129 ms
ok 6 + text 261 ms
ok 7 + int2 239 ms
ok 8 + int4 259 ms
ok 9 + int8 288 ms
ok 10 + oid 240 ms
ok 11 + float4 339 ms
ok 12 + float8 424 ms
ok 13 + bit 397 ms
ok 14 + numeric 1684 ms
ok 15 + txid 216 ms
ok 16 + uuid 252 ms
ok 17 + enum 651 ms
ok 18 + money 306 ms
ok 19 + rangetypes 996 ms
ok 20 + pg_lsn 128 ms
ok 21 + regproc 317 ms
# parallel group (20 tests): md5 lseg path circle time line point macaddr timetz numerology macaddr8 inet date polygon timestamp box timestamptz strings interval multirangetypes
ok 22 + strings 2383 ms
ok 23 + md5 403 ms
ok 24 + numerology 1069 ms
ok 25 + point 653 ms
ok 26 + lseg 419 ms
ok 27 + line 615 ms
ok 28 + box 1652 ms
ok 29 + path 487 ms
ok 30 + polygon 1479 ms
ok 31 + circle 524 ms
ok 32 + date 1431 ms
ok 33 + time 567 ms
ok 34 + timetz 755 ms
ok 35 + timestamp 1540 ms
ok 36 + timestamptz 1763 ms
ok 37 + interval 2479 ms
ok 38 + inet 1244 ms
ok 39 + macaddr 700 ms
ok 40 + macaddr8 1157 ms
ok 41 + multirangetypes 3185 ms
# parallel group (12 tests): comments misc_sanity unicode mvcc xid expressions type_sanity geometry tstypes horology regex opr_sanity
ok 42 + geometry 459 ms
ok 43 + horology 480 ms
ok 44 + tstypes 480 ms
ok 45 + regex 517 ms
ok 46 + type_sanity 367 ms
ok 47 + opr_sanity 823 ms
ok 48 + misc_sanity 85 ms
ok 49 + comments 51 ms
ok 50 + expressions 336 ms
ok 51 + unicode 122 ms
ok 52 + xid 319 ms
ok 53 + mvcc 212 ms
# parallel group (5 tests): copyselect copydml copy insert_conflict insert
ok 54 + copy 478 ms
ok 55 + copyselect 176 ms
ok 56 + copydml 331 ms
ok 57 + insert 2779 ms
ok 58 + insert_conflict 1186 ms
# parallel group (7 tests): create_function_c create_operator create_schema create_misc create_type create_procedure create_table
ok 59 + create_function_c 37 ms
ok 60 + create_misc 237 ms
ok 61 + create_operator 119 ms
ok 62 + create_procedure 267 ms
ok 63 + create_table 1775 ms
ok 64 + create_type 256 ms
ok 65 + create_schema 230 ms
# parallel group (5 tests): index_including_gist index_including create_index_spgist create_view create_index
ok 66 + create_index 5245 ms
ok 67 + create_index_spgist 1834 ms
ok 68 + create_view 2235 ms
ok 69 + index_including 1395 ms
ok 70 + index_including_gist 998 ms
# parallel group (16 tests): create_cast hash_func errors infinite_recurse select roleattributes typed_table create_aggregate drop_if_exists create_function_sql create_am vacuum constraints updatable_views inherit triggers
ok 71 + create_aggregate 1007 ms
ok 72 + create_function_sql 1264 ms
ok 73 + create_cast 345 ms
ok 74 + constraints 4256 ms
ok 75 + triggers 15820 ms
ok 76 + select 735 ms
ok 77 + inherit 9818 ms
ok 78 + typed_table 959 ms
ok 79 + vacuum 2085 ms
ok 80 + drop_if_exists 1147 ms
ok 81 + updatable_views 6753 ms
ok 82 + roleattributes 910 ms
ok 83 + create_am 1286 ms
ok 84 + hash_func 359 ms
ok 85 + errors 391 ms
ok 86 + infinite_recurse 698 ms
ok 87 - sanity_check 293 ms
# parallel group (20 tests): select_distinct_on select_having delete random select_implicit case namespace select_distinct prepared_xacts select_into union arrays portals transactions subselect hash_index update aggregates join btree_index
ok 88 + select_into 602 ms
ok 89 + select_distinct 496 ms
ok 90 + select_distinct_on 99 ms
ok 91 + select_implicit 353 ms
ok 92 + select_having 232 ms
ok 93 + subselect 1363 ms
ok 94 + union 931 ms
ok 95 + case 401 ms
ok 96 + join 4070 ms
ok 97 + aggregates 2883 ms
ok 98 + transactions 1305 ms
ok 99 + random 324 ms
ok 100 + portals 1114 ms
ok 101 + arrays 1109 ms
ok 102 + btree_index 4247 ms
ok 103 + hash_index 2084 ms
ok 104 + update 2114 ms
ok 105 + delete 248 ms
ok 106 + namespace 424 ms
ok 107 + prepared_xacts 492 ms
# parallel group (20 tests): init_privs drop_operator security_label tablesample password lock object_address replica_identity collate groupingsets matview identity gin spgist gist join_hash brin generated privileges rowsecurity
ok 108 + brin 6066 ms
ok 109 + gin 4136 ms
ok 110 + gist 4903 ms
ok 111 + spgist 4252 ms
ok 112 + privileges 7941 ms
ok 113 + init_privs 145 ms
ok 114 + security_label 818 ms
ok 115 + collate 2091 ms
ok 116 + matview 3091 ms
ok 117 + lock 1347 ms
ok 118 + replica_identity 2013 ms
ok 119 + rowsecurity 8679 ms
ok 120 + object_address 1442 ms
ok 121 + tablesample 832 ms
ok 122 + groupingsets 2132 ms
ok 123 + drop_operator 451 ms
ok 124 + password 883 ms
ok 125 + identity 4056 ms
ok 126 + generated 6764 ms
ok 127 + join_hash 5896 ms
# parallel group (2 tests): brin_bloom brin_multi
ok 128 + brin_bloom 295 ms
ok 129 + brin_multi 1138 ms
# parallel group (18 tests): async dbsize collate.utf8 sysviews tidrangescan tidscan tsrf tid alter_operator misc misc_functions incremental_sort create_role alter_generic merge create_table_like collate.icu.utf8 without_overlaps
ok 130 + create_table_like 2623 ms
ok 131 + alter_generic 1423 ms
ok 132 + alter_operator 600 ms
ok 133 + misc 681 ms
ok 134 + async 64 ms
ok 135 + dbsize 120 ms
ok 136 + merge 2391 ms
ok 137 + misc_functions 763 ms
ok 138 + sysviews 290 ms
ok 139 + tsrf 505 ms
ok 140 + tid 520 ms
ok 141 + tidscan 322 ms
ok 142 + tidrangescan 307 ms
ok 143 + collate.utf8 212 ms
ok 144 + collate.icu.utf8 2951 ms
ok 145 + incremental_sort 816 ms
ok 146 + create_role 1233 ms
ok 147 + without_overlaps 3574 ms
# parallel group (7 tests): collate.windows.win1252 collate.linux.utf8 psql_crosstab amutils rules psql stats_ext
ok 148 + rules 2235 ms
ok 149 + psql 2272 ms
ok 150 + psql_crosstab 61 ms
ok 151 + amutils 67 ms
ok 152 + stats_ext 5054 ms
ok 153 + collate.linux.utf8 40 ms
ok 154 + collate.windows.win1252 33 ms
not ok 155 - select_parallel 3042 ms
ok 156 - write_parallel 363 ms
ok 157 - vacuum_parallel 267 ms
# parallel group (2 tests): subscription publication
ok 158 + publication 2237 ms
ok 159 + subscription 201 ms
# parallel group (17 tests): advisory_lock portals_p2 xmlmap combocid functional_deps tsdicts select_views guc dependency equivclass bitmapops indirect_toast window cluster tsearch foreign_data foreign_key
ok 160 + select_views 691 ms
ok 161 + portals_p2 280 ms
ok 162 + foreign_key 8600 ms
ok 163 + cluster 2086 ms
ok 164 + dependency 1019 ms
ok 165 + guc 726 ms
ok 166 + bitmapops 1046 ms
ok 167 + combocid 420 ms
ok 168 + tsearch 2268 ms
ok 169 + tsdicts 689 ms
ok 170 + foreign_data 5335 ms
ok 171 + window 1954 ms
ok 172 + xmlmap 419 ms
ok 173 + functional_deps 515 ms
ok 174 + advisory_lock 254 ms
ok 175 + indirect_toast 1090 ms
ok 176 + equivclass 1023 ms
# parallel group (8 tests): jsonpath_encoding json_encoding sqljson jsonpath sqljson_queryfuncs jsonb_jsonpath json jsonb
ok 177 + json 488 ms
ok 178 + jsonb 736 ms
ok 179 + json_encoding 83 ms
ok 180 + jsonpath 264 ms
ok 181 + jsonpath_encoding 34 ms
ok 182 + jsonb_jsonpath 471 ms
ok 183 + sqljson 237 ms
ok 184 + sqljson_queryfuncs 270 ms
# parallel group (18 tests): prepare returning conversion plancache limit temp largeobject with sequence copy2 rowtypes truncate xml polymorphism rangefuncs domain plpgsql alter_table
ok 185 + plancache 524 ms
ok 186 + limit 576 ms
ok 187 + plpgsql 6264 ms
ok 188 + copy2 1363 ms
ok 189 + temp 881 ms
ok 190 + domain 1996 ms
ok 191 + rangefuncs 1746 ms
ok 192 + prepare 215 ms
ok 193 + conversion 487 ms
ok 194 + truncate 1567 ms
ok 195 + alter_table 8464 ms
ok 196 + sequence 1314 ms
ok 197 + polymorphism 1708 ms
ok 198 + rowtypes 1487 ms
ok 199 + returning 405 ms
ok 200 + largeobject 1187 ms
ok 201 + with 1242 ms
ok 202 + xml 1588 ms
# parallel group (13 tests): predicate hash_part reloptions partition_info memoize explain compression partition_aggregate stats tuplesort partition_prune partition_join indexing
ok 203 + partition_join 4379 ms
ok 204 + partition_prune 4204 ms
ok 205 + reloptions 567 ms
ok 206 + hash_part 258 ms
ok 207 + indexing 4919 ms
ok 208 + partition_aggregate 2241 ms
ok 209 + partition_info 651 ms
ok 210 + tuplesort 3053 ms
ok 211 + explain 712 ms
ok 212 + compression 1445 ms
ok 213 + memoize 656 ms
ok 214 + stats 2329 ms
ok 215 + predicate 113 ms
# parallel group (2 tests): oidjoins event_trigger
ok 216 + oidjoins 519 ms
ok 217 + event_trigger 591 ms
ok 218 - event_trigger_login 67 ms
ok 219 - fast_default 460 ms
ok 220 - tablespace 982 ms
1..220
# 1 of 220 tests failed.
# The differences that caused some tests to fail can be viewed in the file "/tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/regression.diffs".
# A copy of the test summary that you see above is saved in the file "/tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/regression.out".
=== dumping /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/regression.diffs ===
diff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/select_parallel.out /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/results/select_parallel.out
--- /tmp/cirrus-ci-build/src/test/regress/expected/select_parallel.out	2024-03-29 02:07:28.796752000 +0000
+++ /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/results/select_parallel.out	2024-03-29 02:10:10.900380000 +0000
@@ -451,26 +451,36 @@
   join tenk1 t3 on t3.stringu1 = tenk1.stringu1
   where tenk1.four = t.four );
-                                                                                                   QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
- Seq Scan on public.tenk1 t
+                                                                                                   QUERY PLAN
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Hash Join
   Output: t.unique1, t.unique2, t.two, t.four, t.ten, t.twenty, t.hundred, t.thousand, t.twothousand, t.fivethous, t.tenthous, t.odd, t.even, t.stringu1, t.stringu2, t.string4
-   Filter: (SubPlan 1)
-   SubPlan 1
-     ->  Hash Join
-           Output: t.two
-           Hash Cond: (tenk1.stringu1 = t3.stringu1)
-           ->  Seq Scan on public.tenk1
-                 Output: tenk1.unique1, tenk1.unique2, tenk1.two, tenk1.four, tenk1.ten, tenk1.twenty, tenk1.hundred, tenk1.thousand, tenk1.twothousand, tenk1.fivethous, tenk1.tenthous, tenk1.odd, tenk1.even, tenk1.stringu1, tenk1.stringu2, tenk1.string4
-                 Filter: (tenk1.four = t.four)
-           ->  Hash
-                 Output: t3.stringu1
-                 ->  Gather
-                       Output: t3.stringu1
-                       Workers Planned: 4
-                       ->  Parallel Seq Scan on public.tenk1 t3
-                             Output: t3.stringu1
-(17 rows)
+   Inner Unique: true
+   Hash Cond: (t.four = tenk1.four)
+   ->  Gather
+         Output: t.unique1, t.unique2, t.two, t.four, t.ten, t.twenty, t.hundred, t.thousand, t.twothousand, t.fivethous, t.tenthous, t.odd, t.even, t.stringu1, t.stringu2, t.string4
+         Workers Planned: 4
+         ->  Parallel Seq Scan on public.tenk1 t
+               Output: t.unique1, t.unique2, t.two, t.four, t.ten, t.twenty, t.hundred, t.thousand, t.twothousand, t.fivethous, t.tenthous, t.odd, t.even, t.stringu1, t.stringu2, t.string4
+               Filter: (t.two IS NOT NULL)
+   ->  Hash
+         Output: tenk1.four
+         ->  HashAggregate
+               Output: tenk1.four
+               Group Key: tenk1.four
+               ->  Gather
+                     Output: tenk1.four
+                     Workers Planned: 4
+                     ->  Parallel Hash Join
+                           Output: tenk1.four
+                           Hash Cond: (tenk1.stringu1 = t3.stringu1)
+                           ->  Parallel Seq Scan on public.tenk1
+                                 Output: tenk1.unique1, tenk1.unique2, tenk1.two, tenk1.four, tenk1.ten, tenk1.twenty, tenk1.hundred, tenk1.thousand, tenk1.twothousand, tenk1.fivethous, tenk1.tenthous, tenk1.odd, tenk1.even, tenk1.stringu1, tenk1.stringu2, tenk1.string4
+                           ->  Parallel Hash
+                                 Output: t3.stringu1
+                                 ->  Parallel Seq Scan on public.tenk1 t3
+                                       Output: t3.stringu1
+(27 rows)
 
 -- this is not parallel-safe due to use of random() within SubLink's testexpr:
 explain (costs off)
=== EOF ===
[02:10:38.931](87.478s) not ok 2 - regression tests pass
[02:10:38.931](0.000s)
[02:10:38.931](0.000s) # Failed test 'regression tests pass'
# at /tmp/cirrus-ci-build/src/test/recovery/t/027_stream_regress.pl line 95.
[02:10:38.931](0.000s) # got: '256'
# expected: '0'
1 1 1 2 1 1 1 9 5 5 3 4 3 4 4 1 32 1 1 1 4001 6 104 2 1 5 1006 1 2 41 5 17 -2 33 34 9 1 1 1 1 1 1 1 -1 1 1 -1 -32768 32767 46
Waiting for replication conn standby_1's replay_lsn to pass 0/1449BCB0 on primary
done
# Running: pg_dumpall -f /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/primary.dump --no-sync -p 60830 --no-unlogged-table-data
[02:10:41.761](2.830s) ok 3 - dump primary server
# Running: pg_dumpall -f /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/standby.dump --no-sync -p 60831
[02:10:45.025](3.264s) ok 4 - dump standby server
# Running: diff /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/primary.dump /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/standby.dump
[02:10:45.041](0.017s) ok 5 - compare primary and standby dumps
[02:10:45.261](0.219s) ok 6 - check contents of pg_stat_statements on regression database
### Stopping node "standby_1" using mode fast
# Running: pg_ctl -D /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/t_027_stream_regress_standby_1_data/pgdata -m fast stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby_1"
### Stopping node "primary" using mode fast
# Running: pg_ctl -D /tmp/cirrus-ci-build/build/testrun/recovery/027_stream_regress/data/t_027_stream_regress_primary_data/pgdata -m fast stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "primary"
[02:10:45.486](0.225s) 1..6
[02:10:45.490](0.005s) # Looks like you failed 1 test of 6.