ansible-playbook 2.9.27
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 2.7.5 (default, Nov 14 2023, 16:14:06) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
Using /etc/ansible/ansible.cfg as config file
Skipping callback 'actionable', as we already have a stdout callback.
Skipping callback 'counter_enabled', as we already have a stdout callback.
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'full_skip', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'null', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
Skipping callback 'selective', as we already have a stdout callback.
Skipping callback 'skippy', as we already have a stdout callback.
Skipping callback 'stderr', as we already have a stdout callback.
Skipping callback 'unixy', as we already have a stdout callback.
Skipping callback 'yaml', as we already have a stdout callback.

PLAYBOOK: tests_qnetd_and_cluster.yml ******************************************
2 plays in /tmp/collections-BCh/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_qnetd_and_cluster.yml

PLAY [all] *********************************************************************
META: ran handlers

TASK [Include vault variables] *************************************************
task path: /tmp/collections-BCh/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_qnetd_and_cluster.yml:5
ok: [managed-node1] => {"ansible_facts": {"ha_cluster_hacluster_password": {"__ansible_vault": "$ANSIBLE_VAULT;1.1;AES256\n31303833633366333561656439323930303361333161363239346166656537323933313436\n3432386236656563343237306335323637396239616230353561330a313731623238393238\n62343064666336643930663239383936616465643134646536656532323461356237646133\n3761616633323839633232353637366266350a313163633236376666653238633435306565\n3264623032333736393535663833\n"}}, "ansible_included_var_files": ["/tmp/ha_cluster-Rli/tests/vars/vault-variables.yml"], "changed": false}
META: ran handlers
META: ran handlers

PLAY [Ensure a cluster and a qnetd cannot be configured on the same host] ******
META: ran handlers

TASK [Set up test environment] *************************************************
task path: /tmp/collections-BCh/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_qnetd_and_cluster.yml:19
ERROR! the role 'fedora.linux_system_roles.ha_cluster' was not found in /tmp/collections-BCh/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/roles:/root/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles:/tmp/collections-BCh/ansible_collections/fedora/linux_system_roles/tests/ha_cluster

The error appears to be in '/tmp/collections-BCh/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_qnetd_and_cluster.yml': line 21, column 19, but may be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

          include_role:
            name: fedora.linux_system_roles.ha_cluster
                  ^ here
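Note: for reference, a minimal sketch of the task the error points at (tests_qnetd_and_cluster.yml, line 21). Only the include_role/name pair and the task name are taken from the output above; the surrounding structure is an assumption:

    - name: Set up test environment
      include_role:
        # FQCN; resolves only if the fedora.linux_system_roles collection
        # is visible on Ansible's collections path
        name: fedora.linux_system_roles.ha_cluster

Because the role is addressed by its fully qualified collection name, the lookup cannot fall back to the plain roles paths listed in the error. With the test tree under /tmp/collections-BCh, one plausible fix is to export ANSIBLE_COLLECTIONS_PATHS=/tmp/collections-BCh before the run, or to install the collection with ansible-galaxy.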
TASK [Check errors] ************************************************************
task path: /tmp/collections-BCh/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_qnetd_and_cluster.yml:28
fatal: [managed-node1]: FAILED! => {"msg": "The conditional check ''Qnetd cannot be configured on a cluster node' in ansible_failed_result.msg' failed. The error was: error while evaluating conditional ('Qnetd cannot be configured on a cluster node' in ansible_failed_result.msg): 'ansible_failed_result' is undefined"}

PLAY RECAP *********************************************************************
managed-node1              : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
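Note: ansible_failed_result is defined only inside a rescue section, after a task in the corresponding block has failed at runtime. Here the include_role failed at parse time (ERROR!), so no rescue ever ran and the variable stayed undefined. A minimal sketch of the block/rescue shape the test presumably relies on; the structure is an inferred reconstruction, and only the task names and the expected message string come from the log above:

    - name: Run the role and verify the expected failure   # hypothetical name
      block:
        - name: Set up test environment
          include_role:
            name: fedora.linux_system_roles.ha_cluster
      rescue:
        - name: Check errors
          assert:
            that:
              # the test expects the role itself to reject configuring
              # qnetd and a cluster on the same host
              - "'Qnetd cannot be configured on a cluster node' in ansible_failed_result.msg"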