# Checking port 57689
# Found port 57689
Name: primary
Data directory: /tmp/cirrus-ci-build/build-32/testrun/recovery/010_logical_decoding_timelines/data/t_010_logical_decoding_timelines_primary_data/pgdata
Backup directory: /tmp/cirrus-ci-build/build-32/testrun/recovery/010_logical_decoding_timelines/data/t_010_logical_decoding_timelines_primary_data/backup
Archive directory: /tmp/cirrus-ci-build/build-32/testrun/recovery/010_logical_decoding_timelines/data/t_010_logical_decoding_timelines_primary_data/archives
Connection string: port=57689 host=/tmp/Fus4pkkPnI
Log file: /tmp/cirrus-ci-build/build-32/testrun/recovery/010_logical_decoding_timelines/log/010_logical_decoding_timelines_primary.log
[12:48:03.605](0.013s) # initializing database system by copying initdb template
# Running: cp -RPp /tmp/cirrus-ci-build/build-32/tmp_install/initdb-template /tmp/cirrus-ci-build/build-32/testrun/recovery/010_logical_decoding_timelines/data/t_010_logical_decoding_timelines_primary_data/pgdata
# Running: /tmp/cirrus-ci-build/build-32/src/test/regress/pg_regress --config-auth /tmp/cirrus-ci-build/build-32/testrun/recovery/010_logical_decoding_timelines/data/t_010_logical_decoding_timelines_primary_data/pgdata
### Enabling WAL archiving for node "primary"
Name: primary
Version: 17devel
Data directory: /tmp/cirrus-ci-build/build-32/testrun/recovery/010_logical_decoding_timelines/data/t_010_logical_decoding_timelines_primary_data/pgdata
Backup directory: /tmp/cirrus-ci-build/build-32/testrun/recovery/010_logical_decoding_timelines/data/t_010_logical_decoding_timelines_primary_data/backup
Archive directory: /tmp/cirrus-ci-build/build-32/testrun/recovery/010_logical_decoding_timelines/data/t_010_logical_decoding_timelines_primary_data/archives
Connection string: port=57689 host=/tmp/Fus4pkkPnI
Log file: /tmp/cirrus-ci-build/build-32/testrun/recovery/010_logical_decoding_timelines/log/010_logical_decoding_timelines_primary.log
### Starting node "primary"
# Running: pg_ctl -w -D /tmp/cirrus-ci-build/build-32/testrun/recovery/010_logical_decoding_timelines/data/t_010_logical_decoding_timelines_primary_data/pgdata -l /tmp/cirrus-ci-build/build-32/testrun/recovery/010_logical_decoding_timelines/log/010_logical_decoding_timelines_primary.log -o --cluster-name=primary start
pg_ctl: another server might be running; trying to start server anyway
waiting for server to start.... stopped waiting
pg_ctl: could not start server
Examine the log output.
# pg_ctl start failed; logfile:
2024-02-23 12:48:03.649 UTC [58901][postmaster] DEBUG: registering background worker "logical replication launcher"
2024-02-23 12:48:03.650 UTC [58901][postmaster] DEBUG: mmap(4194304) with MAP_HUGETLB failed, huge pages disabled: Cannot allocate memory
2024-02-23 12:48:03.650 UTC [58901][postmaster] DEBUG: dynamic shared memory system will support 184 segments
2024-02-23 12:48:03.650 UTC [58901][postmaster] DEBUG: created dynamic shared memory control segment 2345915966 (4428 bytes)
2024-02-23 12:48:03.650 UTC [58901][postmaster] DEBUG: max_safe_fds = 986, usable_fds = 1000, already_open = 4
2024-02-23 12:48:03.650 UTC [58901][postmaster] LOG: starting PostgreSQL 17devel on x86-linux, compiled by gcc-10.2.1, 32-bit
2024-02-23 12:48:03.650 UTC [58901][postmaster] LOG: listening on Unix socket "/tmp/Fus4pkkPnI/.s.PGSQL.57689"
2024-02-23 12:48:03.651 UTC [58905][checkpointer] DEBUG: checkpointer updated shared memory configuration values
2024-02-23 12:48:03.652 UTC [58907][startup] LOG: database system was interrupted; last known up at 2024-02-23 12:48:01 UTC
2024-02-23 12:48:03.652 UTC [58907][startup] DEBUG: removing all temporary WAL segments
2024-02-23 12:48:03.652 UTC [58907][startup] DEBUG: checkpoint record is at 0/1000024
2024-02-23 12:48:03.652 UTC [58907][startup] DEBUG: redo record is at 0/1000024; shutdown true
2024-02-23 12:48:03.652 UTC [58907][startup] DEBUG: next transaction ID: 3; next OID: 10000
2024-02-23 12:48:03.652 UTC [58907][startup] DEBUG: next MultiXactId: 1; next MultiXactOffset: 0
2024-02-23 12:48:03.652 UTC [58907][startup] DEBUG: oldest unfrozen transaction ID: 3, in database 1
2024-02-23 12:48:03.652 UTC [58907][startup] DEBUG: oldest MultiXactId: 1, in database 1
2024-02-23 12:48:03.652 UTC [58907][startup] DEBUG: commit timestamp Xid oldest/newest: 0/0
2024-02-23 12:48:03.652 UTC [58907][startup] LOG: database system was not properly shut down; automatic recovery in progress
2024-02-23 12:48:03.652 UTC [58907][startup] DEBUG: transaction ID wrap limit is 2147483650, limited by database with OID 1
2024-02-23 12:48:03.652 UTC [58907][startup] DEBUG: MultiXactId wrap limit is 2147483648, limited by database with OID 1
2024-02-23 12:48:03.652 UTC [58907][startup] DEBUG: starting up replication slots
2024-02-23 12:48:03.652 UTC [58907][startup] DEBUG: xmin required by slots: data 0, catalog 0
2024-02-23 12:48:03.652 UTC [58907][startup] DEBUG: starting up replication origin progress state
2024-02-23 12:48:03.652 UTC [58907][startup] DEBUG: didn't need to unlink permanent stats file "pg_stat/pgstat.stat" - didn't exist
2024-02-23 12:48:03.652 UTC [58907][startup] DEBUG: resetting unlogged relations: cleanup 1 init 0
2024-02-23 12:48:03.652 UTC [58907][startup] LOG: invalid record length at 0/100008C: expected at least 24, got 0
2024-02-23 12:48:03.652 UTC [58907][startup] LOG: redo is not required
2024-02-23 12:48:03.652 UTC [58907][startup] DEBUG: resetting unlogged relations: cleanup 0 init 1
2024-02-23 12:48:03.652 UTC [58901][postmaster] DEBUG: postmaster received pmsignal signal
2024-02-23 12:48:03.652 UTC [58907][startup] DEBUG: MultiXactId wrap limit is 2147483648, limited by database with OID 1
2024-02-23 12:48:03.652 UTC [58907][startup] DEBUG: MultiXact member stop limit is now 4294914944 based on MultiXact 1
2024-02-23 12:48:03.652 UTC [58905][checkpointer] LOG: checkpoint starting: end-of-recovery immediate wait
2024-02-23 12:48:03.652 UTC [58905][checkpointer] DEBUG: performing replication slot checkpoint
TRAP: failed Assert("TYPEALIGN(8, (uintptr_t)(currval)) == (uintptr_t)(currval)"), File: "../src/include/port/atomics.h", Line: 571, PID: 58905
postgres: primary: checkpointer performing end-of-recovery checkpoint(ExceptionalCondition+0x69)[0x5745450c]
postgres: primary: checkpointer performing end-of-recovery checkpoint(+0x4e8596)[0x56b00596]
postgres: primary: checkpointer performing end-of-recovery checkpoint(XLogFlush+0x1b3)[0x56b0c02a]
postgres: primary: checkpointer performing end-of-recovery checkpoint(CreateCheckPoint+0x8b9)[0x56b0e6fb]
postgres: primary: checkpointer performing end-of-recovery checkpoint(CheckpointerMain+0x76d)[0x5700088a]
postgres: primary: checkpointer performing end-of-recovery checkpoint(AuxiliaryProcessMain+0x1bf)[0x56ffb0e6]
postgres: primary: checkpointer performing end-of-recovery checkpoint(+0x9eeda5)[0x57006da5]
postgres: primary: checkpointer performing end-of-recovery checkpoint(PostmasterMain+0x154d)[0x5700b628]
postgres: primary: checkpointer performing end-of-recovery checkpoint(main+0x405)[0x56e23e88]
/lib/i386-linux-gnu/libc.so.6(__libc_start_main+0x106)[0xf6a2ee46]
postgres: primary: checkpointer performing end-of-recovery checkpoint(_start+0x31)[0x56939251]
2024-02-23 12:48:03.978 UTC [58901][postmaster] LOG: checkpointer process (PID 58905) was terminated by signal 6: Aborted
2024-02-23 12:48:03.978 UTC [58901][postmaster] LOG: terminating any other active server processes
2024-02-23 12:48:03.978 UTC [58901][postmaster] DEBUG: sending SIGQUIT to process 58907
2024-02-23 12:48:03.978 UTC [58901][postmaster] DEBUG: sending SIGQUIT to process 58906
2024-02-23 12:48:03.978 UTC [58901][postmaster] LOG: shutting down because restart_after_crash is off
2024-02-23 12:48:03.978 UTC [58901][postmaster] DEBUG: cleaning up dynamic shared memory control segment with ID 2345915966
2024-02-23 12:48:03.979 UTC [58901][postmaster] LOG: database system is shut down
# No postmaster PID for node "primary"
[12:48:04.042](0.437s) Bail out! pg_ctl start failed
# No postmaster PID for node "primary"