HewlettPackard / oneview-ansible-collection

Ansible Collection and Sample Playbooks for HPE OneView

oneview_server_profile cannot create more than one server profile in parallel

jullienl opened this issue · comments

Re-opening issue we had on the Ansible OneView module, see HewlettPackard/oneview-ansible#313 as I face the same error with the OneView Ansible collection.

TASK [Creating Server Profile "ESX-1-deploy" from Server Profile Template "ESXi7 BFS"] ***********************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: hpeOneView.exceptions.HPEOneViewTaskError: A profile is already assigned to the server hardware {"name":"Frame4, bay 4", "uri":"/rest/server-hardware/39313738-3034-5A43-3231-32343036474B"}.
fatal: [ESX-1-deploy -> localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n  File \"/root/.ansible/tmp/ansible-tmp-1635241176.7871196-96277-272638110473477/AnsiballZ_oneview_server_profile.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"/root/.ansible/tmp/ansible-tmp-1635241176.7871196-96277-272638110473477/AnsiballZ_oneview_server_profile.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/root/.ansible/tmp/ansible-tmp-1635241176.7871196-96277-272638110473477/AnsiballZ_oneview_server_profile.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible_collections.hpe.oneview.plugins.modules.oneview_server_profile', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n    mod_name, mod_spec, pkg_name, script_name)\n  File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_oneview_server_profile_payload_8h3k37_v/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py\", line 693, in <module>\n  File \"/tmp/ansible_oneview_server_profile_payload_8h3k37_v/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py\", line 689, in main\n  File \"/tmp/ansible_oneview_server_profile_payload_8h3k37_v/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/module_utils/oneview.py\", line 633, in run\n  File \"/tmp/ansible_oneview_server_profile_payload_8h3k37_v/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py\", line 284, in execute_module\n  File 
\"/tmp/ansible_oneview_server_profile_payload_8h3k37_v/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py\", line 324, in __present\n  File \"/tmp/ansible_oneview_server_profile_payload_8h3k37_v/ansible_oneview_server_profile_payload.zip/ansible_collections/hpe/oneview/plugins/modules/oneview_server_profile.py\", line 461, in __create_profile\n  File \"/usr/local/lib/python3.6/site-packages/hpeOneView/resources/servers/server_profiles.py\", line 74, in create\n    resource_data = self._helper.create(data, timeout=timeout, force=force)\n  File \"/usr/local/lib/python3.6/site-packages/hpeOneView/resources/resource.py\", line 464, in create\n    return self.do_post(uri, data, timeout, custom_headers)\n  File \"/usr/local/lib/python3.6/site-packages/hpeOneView/resources/resource.py\", line 816, in do_post\n    return self._task_monitor.wait_for_task(task, timeout)\n  File \"/usr/local/lib/python3.6/site-packages/hpeOneView/resources/task_monitor.py\", line 82, in wait_for_task\n    task_response = self.__get_task_response(task)\n  File \"/usr/local/lib/python3.6/site-packages/hpeOneView/resources/task_monitor.py\", line 142, in __get_task_response\n    raise HPEOneViewTaskError(msg, error_code)\nhpeOneView.exceptions.HPEOneViewTaskError: A profile is already assigned to the server hardware {\"name\":\"Frame4, bay 4\", \"uri\":\"/rest/server-hardware/39313738-3034-5A43-3231-32343036474B\"}.\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
changed: [ESX-2-deploy -> localhost]

Inventory file has the following content:

[ESX]
ESX-1-deploy host_management_ip=192.168.3.171 
ESX-2-deploy host_management_ip=192.168.3.175 

Playbook is :

# Creating a Server Profile in HPE OneView from a boot from SAN Server Profile Template:

    - name: Creating Server Profile "{{ inventory_hostname }}" from Server Profile Template "{{ server_template }}"
      oneview_server_profile:
        config: "{{ config }}"
        data:
          serverProfileTemplateName: "{{ server_template }}"
          name: "{{ inventory_hostname }}"
        # serverHardwareUri: "/rest/server-hardware/39313738-3234-584D-5138-323830343848"
        # server_hardware: Encl1, bay 12
        # If no hardware is provided, the module tries to select an available one
      delegate_to: localhost
      register: result
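
(As a possible workaround for the race described below, each inventory host could be pinned to its own bay via a per-host variable — a sketch only, assuming the module accepts a hardware name in `data` as the commented-out `server_hardware` line above suggests; the variable name `ov_bay` and the bay values are hypothetical:)

```yaml
# hosts — hypothetical per-host bay assignment
# [ESX]
# ESX-1-deploy host_management_ip=192.168.3.171 ov_bay="Frame4, bay 4"
# ESX-2-deploy host_management_ip=192.168.3.175 ov_bay="Frame4, bay 5"

- name: Creating Server Profile "{{ inventory_hostname }}" from Server Profile Template "{{ server_template }}"
  oneview_server_profile:
    config: "{{ config }}"
    data:
      serverProfileTemplateName: "{{ server_template }}"
      name: "{{ inventory_hostname }}"
      # Assumption: naming the hardware explicitly bypasses auto-selection
      serverHardwareName: "{{ ov_bay }}"
  delegate_to: localhost
  register: result
```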

Environment Details

  • Ansible 2.9.25 - Python 3.6.8 - python-hpOneView 6.30
  • Ansible Collection for HPE OneView 6.30
  • HPE OneView 6.30
  • Synergy 480 Gen10
  • SSP 2021-05.03

Hi @jullienl,
Looks like you are trying to create multiple server profiles on a single server hardware.
A server hardware can have only one profile assigned.
Please assign a different server hardware to each profile and give it a try, and let us know if you still face any issues.

Well... I don't want to define a Server Hardware in my playbook.

The module documentation says that if you don't specify a Server Hardware, the module tries to select an available one. This works for one creation but not for several. The module should support automatic Server Hardware selection whether there is one creation request or more...

This issue existed in the previous library and was fixed in the https://github.com/HewlettPackard/oneview-ansible/tree/bug_fix/create_profiles_in_parallel branch. My playbook works successfully with that fix on the previous library, but not with this collection.

Thanks

hi @jullienl
I tried the playbook below and it worked as expected, i.e., it successfully created 2 server profiles in parallel without any error. I also checked the creation times of the server profiles to confirm that they ran in parallel.

    - name: Create a Server Profile from a Server Profile Template
      oneview_server_profile:
        config: "{{ config }}"
        data:
          serverProfileTemplateName: "{{ item.ov_template }}"
          name: "{{ item.name }}"
          description: "Test Description"
          initialScopeUris:
            - "{{ scopes[0].uri }}"
        params: # Supported only in API version >= 600
          force: True
      delegate_to: localhost
      register: result
      async: 10
      poll: 0
      with_items:
        - { name: 'sp_1', ov_template: 'SptFirmware' }
        - { name: 'sp_2', ov_template: 'test' }

Hi @chebroluharika

I'm glad it works with with_items, but that is not the method I use to create profiles in parallel.

It is when you use an inventory hosts file with multiple hosts defined in a group that the server profile creation fails:

ansible-playbook -i hosts RHEL_provisioning.yml

This works when deleting server profiles, but not when creating them, because your module selects the same bay for the other profile resources.
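
(One way to keep the inventory-driven flow while sidestepping the race — a sketch, assuming the bay collision only happens under concurrency — is Ansible's task-level `throttle` keyword, available since Ansible 2.9:)

```yaml
- name: Creating Server Profile "{{ inventory_hostname }}" from Server Profile Template "{{ server_template }}"
  oneview_server_profile:
    config: "{{ config }}"
    data:
      serverProfileTemplateName: "{{ server_template }}"
      name: "{{ inventory_hostname }}"
  delegate_to: localhost
  register: result
  throttle: 1  # only one host runs this task at a time, so auto-selection never sees a half-assigned bay
```

The trade-off is that profile creations are serialized for this one task; the rest of the play still runs in parallel across hosts.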

Hi @jullienl ,
Can you please try the playbook below with async and poll, as those are the features provided by Ansible to support parallel operation.

    - name: Creating Server Profile "{{ inventory_hostname }}" from Server Profile Template "{{ server_template }}"
      oneview_server_profile:
        config: "{{ config }}"
        data:
          serverProfileTemplateName: "{{ server_template }}"
          name: "{{ inventory_hostname }}"
        # serverHardwareUri: "/rest/server-hardware/39313738-3234-584D-5138-323830343848"
        # server_hardware: Encl1, bay 12
        # If no hardware is provided, the module tries to select an available one
      delegate_to: localhost
      register: result
      async: 10
      poll: 0

The Async option does not help. Only one profile is created, and no errors are reported.

Another problem with using async is that the next task in my playbook fails, because it is executed immediately after the profile creation task is launched and therefore before the creation is complete. This is not what we want...
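
(Two details in the suggested approach may explain these symptoms — a sketch, not a verified fix. First, `async: 10` gives the background job only 10 seconds before Ansible abandons it, while a profile creation typically takes minutes. Second, a follow-up `async_status` task is the standard Ansible pattern for blocking until the background job finishes, so later tasks see a completed profile; the timeout and retry values below are illustrative:)

```yaml
# Launch the creation in the background, allowing enough time for it to finish
- name: Creating Server Profile "{{ inventory_hostname }}" from Server Profile Template "{{ server_template }}"
  oneview_server_profile:
    config: "{{ config }}"
    data:
      serverProfileTemplateName: "{{ server_template }}"
      name: "{{ inventory_hostname }}"
  delegate_to: localhost
  async: 1800        # maximum runtime in seconds; 10 is far too short for a profile creation
  poll: 0
  register: sp_job

# Block here until the background job completes, so subsequent tasks run against a finished profile
- name: Wait for Server Profile creation to complete
  async_status:
    jid: "{{ sp_job.ansible_job_id }}"
  delegate_to: localhost
  register: sp_result
  until: sp_result.finished
  retries: 180
  delay: 10
```

Note that this only addresses the timeout and task-ordering problems; it does not by itself fix the concurrent bay-selection race reported above.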