# Checking port 60036
# Found port 60036
Name: primary
Data directory: /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_primary_data/pgdata
Backup directory: /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_primary_data/backup
Archive directory: /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_primary_data/archives
Connection string: port=60036 host=/var/folders/n8/pp_chp6x3jlfy1n94r6hdjv40000gn/T/vs7pn9WM1a
Log file: /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/log/035_standby_logical_decoding_primary.log
# Checking port 60037
# Found port 60037
Name: standby
Data directory: /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata
Backup directory: /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/backup
Archive directory: /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/archives
Connection string: port=60037 host=/var/folders/n8/pp_chp6x3jlfy1n94r6hdjv40000gn/T/vs7pn9WM1a
Log file: /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/log/035_standby_logical_decoding_standby.log
# Checking port 60038
# Found port 60038
Name: cascading_standby
Data directory: /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_cascading_standby_data/pgdata
Backup directory: /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_cascading_standby_data/backup
Archive directory: /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_cascading_standby_data/archives
Connection string: port=60038 host=/var/folders/n8/pp_chp6x3jlfy1n94r6hdjv40000gn/T/vs7pn9WM1a
Log file: /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/log/035_standby_logical_decoding_cascading_standby.log
# Checking port 60039
# Found port 60039
Name: subscriber
Data directory: /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_subscriber_data/pgdata
Backup directory: /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_subscriber_data/backup
Archive directory: /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_subscriber_data/archives
Connection string: port=60039 host=/var/folders/n8/pp_chp6x3jlfy1n94r6hdjv40000gn/T/vs7pn9WM1a
Log file: /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/log/035_standby_logical_decoding_subscriber.log
[22:44:49.619](0.187s) # initializing database system by copying initdb template
# Running: cp -RPp /Users/admin/pgsql/build/tmp_install/initdb-template /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_primary_data/pgdata
# Running: /Users/admin/pgsql/build/src/test/regress/pg_regress --config-auth /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_primary_data/pgdata
### Enabling WAL archiving for node "primary"
Name: primary
Version: 17devel
Data directory: /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_primary_data/pgdata
Backup directory: /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_primary_data/backup
Archive directory: /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_primary_data/archives
Connection string: port=60036 host=/var/folders/n8/pp_chp6x3jlfy1n94r6hdjv40000gn/T/vs7pn9WM1a
Log file: /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/log/035_standby_logical_decoding_primary.log
### Starting node "primary"
# Running: pg_ctl -w -D /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_primary_data/pgdata -l /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/log/035_standby_logical_decoding_primary.log -o --cluster-name=primary start
waiting for server to start.... done
server started
# Postmaster PID for node "primary" is 13994
[22:44:50.498](0.879s) ok 1 - Physical slot reports conflict_reason as NULL
# Taking pg_basebackup b1 from node "primary"
# Running: pg_basebackup -D /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_primary_data/backup/b1 -h /var/folders/n8/pp_chp6x3jlfy1n94r6hdjv40000gn/T/vs7pn9WM1a -p 60036 --checkpoint fast --no-sync
# Backup finished
# Initializing node "standby" from backup "b1" of node "primary"
### Enabling streaming replication for node "standby"
### Enabling WAL restore for node "standby"
### Starting node "standby"
# Running: pg_ctl -w -D /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata -l /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/log/035_standby_logical_decoding_standby.log -o --cluster-name=standby start
waiting for server to start.... done
server started
# Postmaster PID for node "standby" is 14292
Waiting for replication conn standby's replay_lsn to pass 0/3019E10 on primary
done
[22:44:52.254](1.756s) # initializing database system by copying initdb template
# Running: cp -RPp /Users/admin/pgsql/build/tmp_install/initdb-template /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_subscriber_data/pgdata
# Running: /Users/admin/pgsql/build/src/test/regress/pg_regress --config-auth /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_subscriber_data/pgdata
### Starting node "subscriber"
# Running: pg_ctl -w -D /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_subscriber_data/pgdata -l /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/log/035_standby_logical_decoding_subscriber.log -o --cluster-name=subscriber start
waiting for server to start.... done
server started
# Postmaster PID for node "subscriber" is 14406
[22:44:53.036](0.782s) ok 2 - behaves_ok_inactiveslot on standby created
[22:44:53.242](0.206s) ok 3 - behaves_ok_activeslot on standby created
Waiting for replication conn standby's replay_lsn to pass 0/3026730 on primary
done
[22:44:53.371](0.129s) ok 4 - Decoding produced 14 rows (2 BEGIN/COMMIT and 10 rows)
Waiting for replication conn standby's replay_lsn to pass 0/3026860 on primary
done
[22:44:53.446](0.075s) ok 5 - got expected output from SQL decoding session
Waiting for replication conn standby's replay_lsn to pass 0/3027410 on primary
done
[22:44:53.617](0.172s) ok 6 - got same expected output from pg_recvlogical decoding session
[22:44:53.717](0.099s) ok 7 - pg_recvlogical acknowledged changes
Waiting for replication conn standby's replay_lsn to pass 0/3463408 on primary
done
[22:44:54.003](0.286s) ok 8 - replaying logical slot from another database fails
Waiting for replication conn standby's replay_lsn to pass 0/348AC40 on primary
done
Waiting for all subscriptions in "subscriber" to synchronize data
Waiting for replication conn tap_sub's replay_lsn to pass 0/348AC78 on standby
done
done
Waiting for replication conn standby's replay_lsn to pass 0/348B208 on primary
done
Waiting for replication conn tap_sub's replay_lsn to pass 0/348B208 on standby
done
[22:44:54.876](0.873s) ok 9 - check replicated inserts after subscription on standby
#### Begin standard error
psql::1: NOTICE: dropped replication slot "tap_sub" on publisher
#### End standard error
### Stopping node "subscriber" using mode fast
# Running: pg_ctl -D /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_subscriber_data/pgdata -m fast stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "subscriber"
[22:44:55.465](0.589s) ok 10 - vacuum_full_inactiveslot on standby created
[22:44:55.833](0.368s) ok 11 - vacuum_full_activeslot on standby created
### Reloading node "standby"
# Running: pg_ctl -D /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata reload
server signaled
#### Begin standard error
psql::1: INFO: vacuuming "pg_catalog.pg_class"
psql::1: INFO: "pg_catalog.pg_class": found 5 removable, 420 nonremovable row versions in 14 pages
DETAIL: 0 dead row versions cannot be removed yet.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
#### End standard error
Waiting for replication conn standby's replay_lsn to pass 0/34BBB58 on primary
done
[22:44:56.989](1.156s) ok 12 - inactiveslot slot invalidation is logged with vacuum FULL on pg_class
[22:44:56.990](0.001s) ok 13 - activeslot slot invalidation is logged with vacuum FULL on pg_class
[22:44:57.112](0.123s) ok 14 - confl_active_logicalslot updated
[22:44:57.138](0.025s) ok 15 - vacuum_full_activeslot conflict_reason is rows_removed
[22:44:57.251](0.114s) ok 16 - vacuum_full_inactiveslot conflict_reason is rows_removed
[22:44:57.323](0.071s) ok 17 - pg_recvlogical exited non-zero
[22:44:57.323](0.000s) ok 18 - slot has been invalidated
### Reloading node "standby"
# Running: pg_ctl -D /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata reload
server signaled
### Restarting node "standby"
# Running: pg_ctl -w -D /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata -l /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/log/035_standby_logical_decoding_standby.log restart
waiting for server to shut down.... done
server stopped
waiting for server to start.... done
server started
# Postmaster PID for node "standby" is 15135
[22:44:57.844](0.522s) ok 19 - vacuum_full_activeslot conflict_reason is rows_removed
[22:44:57.866](0.022s) ok 20 - vacuum_full_inactiveslot conflict_reason is rows_removed
Waiting for replication conn standby's replay_lsn to pass 0/40000F8 on primary
done
[22:44:58.908](1.042s) ok 21 - invalidated logical slots do not lead to retaining WAL
[22:44:59.387](0.479s) ok 22 - row_removal_inactiveslot on standby created
[22:44:59.612](0.226s) ok 23 - row_removal_activeslot on standby created
### Reloading node "standby"
# Running: pg_ctl -D /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata reload
server signaled
#### Begin standard error
psql::1: INFO: vacuuming "testdb.pg_catalog.pg_class"
psql::1: INFO: finished vacuuming "testdb.pg_catalog.pg_class": index scans: 1
pages: 0 removed, 11 remain, 11 scanned (100.00% of total)
tuples: 4 removed, 419 remain, 0 are dead but not yet removable
removable cutoff: 754, which was 0 XIDs old when operation ended
frozen: 0 pages from table (0.00% of total) had 0 tuples frozen
index scan needed: 3 pages from table (27.27% of total) had 5 dead item identifiers removed
index "pg_class_oid_index": pages: 4 in total, 0 newly deleted, 0 currently deleted, 0 reusable
index "pg_class_relname_nsp_index": pages: 5 in total, 0 newly deleted, 0 currently deleted, 0 reusable
index "pg_class_tblspc_relfilenode_index": pages: 2 in total, 0 newly deleted, 0 currently deleted, 0 reusable
avg read rate: 121.124 MB/s, avg write rate: 227.108 MB/s
buffer usage: 77 hits, 8 misses, 15 dirtied
WAL usage: 28 records, 15 full page images, 122841 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
#### End standard error
Waiting for replication conn standby's replay_lsn to pass 0/403EDC0 on primary
done
[22:44:59.809](0.197s) ok 24 - inactiveslot slot invalidation is logged with vacuum on pg_class
[22:44:59.809](0.000s) ok 25 - activeslot slot invalidation is logged with vacuum on pg_class
[22:44:59.827](0.017s) ok 26 - confl_active_logicalslot updated
[22:44:59.843](0.016s) ok 27 - row_removal_activeslot conflict_reason is rows_removed
[22:44:59.860](0.016s) ok 28 - row_removal_inactiveslot conflict_reason is rows_removed
[22:44:59.878](0.018s) ok 29 - pg_recvlogical exited non-zero
[22:44:59.878](0.000s) ok 30 - slot has been invalidated
[22:45:00.235](0.357s) ok 31 - shared_row_removal_inactiveslot on standby created
[22:45:00.403](0.168s) ok 32 - shared_row_removal_activeslot on standby created
### Reloading node "standby"
# Running: pg_ctl -D /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata reload
server signaled
#### Begin standard error
psql::1: INFO: vacuuming "testdb.pg_catalog.pg_authid"
psql::1: INFO: finished vacuuming "testdb.pg_catalog.pg_authid": index scans: 1
pages: 0 removed, 1 remain, 1 scanned (100.00% of total)
tuples: 1 removed, 15 remain, 0 are dead but not yet removable
removable cutoff: 757, which was 0 XIDs old when operation ended
new relfrozenxid: 757, which is 27 XIDs ahead of previous value
frozen: 0 pages from table (0.00% of total) had 0 tuples frozen
index scan needed: 1 pages from table (100.00% of total) had 1 dead item identifiers removed
index "pg_authid_rolname_index": pages: 2 in total, 0 newly deleted, 0 currently deleted, 0 reusable
index "pg_authid_oid_index": pages: 2 in total, 0 newly deleted, 0 currently deleted, 0 reusable
avg read rate: 134.698 MB/s, avg write rate: 44.899 MB/s
buffer usage: 34 hits, 3 misses, 1 dirtied
WAL usage: 6 records, 1 full page images, 8692 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
psql::1: INFO: vacuuming "testdb.pg_toast.pg_toast_1260"
psql::1: INFO: finished vacuuming "testdb.pg_toast.pg_toast_1260": index scans: 0
pages: 0 removed, 0 remain, 0 scanned (100.00% of total)
tuples: 0 removed, 0 remain, 0 are dead but not yet removable
removable cutoff: 757, which was 0 XIDs old when operation ended
new relfrozenxid: 757, which is 27 XIDs ahead of previous value
frozen: 0 pages from table (100.00% of total) had 0 tuples frozen
index scan not needed: 0 pages from table (100.00% of total) had 0 dead item identifiers removed
avg read rate: 120.192 MB/s, avg write rate: 120.192 MB/s
buffer usage: 24 hits, 1 misses, 1 dirtied
WAL usage: 1 records, 0 full page images, 188 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
#### End standard error
Waiting for replication conn standby's replay_lsn to pass 0/4042098 on primary
done
[22:45:00.763](0.361s) ok 33 - inactiveslot slot invalidation is logged with vacuum on pg_authid
[22:45:00.764](0.000s) ok 34 - activeslot slot invalidation is logged with vacuum on pg_authid
[22:45:00.804](0.040s) ok 35 - confl_active_logicalslot updated
[22:45:00.825](0.021s) ok 36 - shared_row_removal_activeslot conflict_reason is rows_removed
[22:45:00.841](0.016s) ok 37 - shared_row_removal_inactiveslot conflict_reason is rows_removed
[22:45:00.859](0.018s) ok 38 - pg_recvlogical exited non-zero
[22:45:00.860](0.000s) ok 39 - slot has been invalidated
[22:45:01.220](0.360s) ok 40 - no_conflict_inactiveslot on standby created
[22:45:01.411](0.191s) ok 41 - no_conflict_activeslot on standby created
### Reloading node "standby"
# Running: pg_ctl -D /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata reload
server signaled
#### Begin standard error
psql::1: INFO: vacuuming "testdb.public.conflict_test"
psql::1: INFO: finished vacuuming "testdb.public.conflict_test": index scans: 0
pages: 0 removed, 1 remain, 1 scanned (100.00% of total)
tuples: 4 removed, 4 remain, 0 are dead but not yet removable
removable cutoff: 761, which was 0 XIDs old when operation ended
new relfrozenxid: 760, which is 2 XIDs ahead of previous value
frozen: 0 pages from table (0.00% of total) had 0 tuples frozen
index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed
avg read rate: 71.674 MB/s, avg write rate: 95.566 MB/s
buffer usage: 19 hits, 3 misses, 4 dirtied
WAL usage: 6 records, 4 full page images, 33238 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
psql::1: INFO: vacuuming "testdb.pg_toast.pg_toast_16418"
psql::1: INFO: finished vacuuming "testdb.pg_toast.pg_toast_16418": index scans: 0
pages: 0 removed, 0 remain, 0 scanned (100.00% of total)
tuples: 0 removed, 0 remain, 0 are dead but not yet removable
removable cutoff: 761, which was 0 XIDs old when operation ended
new relfrozenxid: 761, which is 3 XIDs ahead of previous value
frozen: 0 pages from table (100.00% of total) had 0 tuples frozen
index scan not needed: 0 pages from table (100.00% of total) had 0 dead item identifiers removed
avg read rate: 102.796 MB/s, avg write rate: 0.000 MB/s
buffer usage: 24 hits, 1 misses, 0 dirtied
WAL usage: 1 records, 0 full page images, 188 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
#### End standard error
Waiting for replication conn standby's replay_lsn to pass 0/404E1F0 on primary
done
[22:45:01.791](0.380s) ok 42 - inactiveslot slot invalidation is not logged with vacuum on conflict_test
[22:45:01.793](0.001s) ok 43 - activeslot slot invalidation is not logged with vacuum on conflict_test
[22:45:01.814](0.021s) ok 44 - confl_active_logicalslot not updated
[22:45:01.841](0.027s) ok 45 - Logical slots are reported as non conflicting
### Reloading node "standby"
# Running: pg_ctl -D /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata reload
server signaled
### Restarting node "standby"
# Running: pg_ctl -w -D /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata -l /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/log/035_standby_logical_decoding_standby.log restart
waiting for server to shut down.... done
server stopped
waiting for server to start.... done
server started
# Postmaster PID for node "standby" is 16143
[22:45:02.626](0.785s) ok 46 - pruning_inactiveslot on standby created
[22:45:02.827](0.201s) ok 47 - pruning_activeslot on standby created
### Reloading node "standby"
# Running: pg_ctl -D /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata reload
server signaled
Waiting for replication conn standby's replay_lsn to pass 0/405A6E0 on primary
done
[22:45:03.079](0.252s) ok 48 - inactiveslot slot invalidation is logged with on-access pruning
[22:45:03.079](0.000s) ok 49 - activeslot slot invalidation is logged with on-access pruning
[22:45:03.098](0.019s) ok 50 - confl_active_logicalslot updated
[22:45:03.115](0.017s) ok 51 - pruning_activeslot conflict_reason is rows_removed
[22:45:03.240](0.126s) ok 52 - pruning_inactiveslot conflict_reason is rows_removed
[22:45:03.311](0.071s) ok 53 - pg_recvlogical exited non-zero
[22:45:03.312](0.001s) ok 54 - slot has been invalidated
### Reloading node "standby"
# Running: pg_ctl -D /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata reload
server signaled
[22:45:03.744](0.432s) ok 55 - injection_inactiveslot on standby created
[22:45:03.831](0.087s) ok 56 - injection_activeslot on standby created
### Reloading node "standby"
# Running: pg_ctl -D /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata reload
server signaled
Waiting for replication conn standby's replay_lsn to pass 0/406ABC0 on primary
done
#### Begin standard error
psql::1: INFO: vacuuming "testdb.pg_catalog.pg_class"
psql::1: INFO: finished vacuuming "testdb.pg_catalog.pg_class": index scans: 1
pages: 0 removed, 11 remain, 11 scanned (100.00% of total)
tuples: 0 removed, 426 remain, 0 are dead but not yet removable
removable cutoff: 772, which was 0 XIDs old when operation ended
frozen: 0 pages from table (0.00% of total) had 0 tuples frozen
index scan needed: 2 pages from table (18.18% of total) had 2 dead item identifiers removed
index "pg_class_oid_index": pages: 4 in total, 0 newly deleted, 0 currently deleted, 0 reusable
index "pg_class_relname_nsp_index": pages: 5 in total, 0 newly deleted, 0 currently deleted, 0 reusable
index "pg_class_tblspc_relfilenode_index": pages: 2 in total, 0 newly deleted, 0 currently deleted, 0 reusable
avg read rate: 297.053 MB/s, avg write rate: 237.643 MB/s
buffer usage: 76 hits, 10 misses, 8 dirtied
WAL usage: 13 records, 0 full page images, 1303 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
#### End standard error
[22:45:05.235](1.404s) ok 57 - terminating process holding the active slot is logged with injection point
[22:49:20.830](255.595s) # poll_query_until timed out executing this query:
# SELECT (SELECT catalog_xmin::text::int - 770 from pg_catalog.pg_replication_slots where slot_name = 'injection_activeslot') > 0
# expecting this output:
# t
# last actual query output:
# f
# with stderr:
catalog_xmin did not advance at /Users/admin/pgsql/src/test/recovery/t/035_standby_logical_decoding.pl line 808.
# Postmaster PID for node "primary" is 13994
### Stopping node "primary" using mode immediate
# Running: pg_ctl -D /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_primary_data/pgdata -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "primary"
# Postmaster PID for node "standby" is 16143
### Stopping node "standby" using mode immediate
# Running: pg_ctl -D /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby"
# No postmaster PID for node "cascading_standby"
# No postmaster PID for node "subscriber"
[22:49:21.105](0.275s) # Tests were run but no plan was declared and done_testing() was not seen.
[22:49:21.105](0.000s) # Looks like your test exited with 60 just after 57.
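For reference, the query that timed out above can be run by hand against the standby to watch whether the slot's catalog_xmin moves. This is only a sketch based on this run's log: the slot name injection_activeslot, the xid baseline 770, and the conflict_reason column name are specific to this 17devel run.

    -- Manual check against the standby (port 60037 in this run), mirroring the test's poll;
    -- 770 is the catalog_xmin baseline the test compares against in this particular run.
    SELECT slot_name, active, conflict_reason, catalog_xmin,
           (catalog_xmin::text::int - 770) > 0 AS catalog_xmin_advanced
    FROM pg_catalog.pg_replication_slots
    WHERE slot_name = 'injection_activeslot';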