Ansible loop over JSON output from URI Call


Question

I have been trying to get this to work for some time now, but I am unable to do so. I am hoping it's something really small that I am missing.

I am trying to parse JSON output from a task that uses with_items. I understand that the registered variable list_of_components will end up holding the responses in a results array.

    - name: Get list of components for each host
      uri: url="http://192.168.0.11:8080/api/v1/clusters/pstl-cluster/hosts/{{ hostvars[item].ansible_host_fqdn }}"
           method=GET
           force_basic_auth=yes
           user=admin
           password=admin
           HEADER_X-Requested-By="ambari"
           status_code=200,201,202
           return_content=yes
      register: list_of_components
      with_items: "{{ groups['hadoop_cluster'] }}"

#   - debug: msg="Components are {{ (list_of_components.results|from_json)|json_query('content.host_components[*].HostRoles.component_name') }}"
#   - debug: var=list_of_components
#   - debug: msg="Components are {{ list_of_components.results[0].item.content.host_components[*].HostRoles.component_name }}"
    - debug: msg="Components are {{ item }}"
      with_items: "{{ list_of_components.results|from_json }}"

Sample debug output from the "Get list of components for each host" task is posted below. I am trying to get a tuple of (host, component_name) that I can then loop over in the next task.

Did I mention that my understanding of Ansible is fairly limited?

ok: [localhost] => (item=slave2) => {
    "changed": false,
    "connection": "close",
    "content": "{\n  \"href\" : \"http://192.168.0.11:8080/api/v1/clusters/pstl-cluster/hosts/slave2.mycluster\",\n  \"Hosts\" : {\n    \"cluster_name\" : \"pstl-cluster\",\n    \"cpu_count\" : 1,\n    \"disk_info\" : [\n      {\n        \"available\" : \"36725348\",\n        \"device\" : \"/dev/mapper/centos-root\",\n        \"used\" : \"2540208\",\n        \"percent\" : \"7%\",\n        \"size\" : \"39265556\",\n        \"type\" : \"xfs\",\n        \"mountpoint\" : \"/\"\n      },\n      {\n        \"available\" : \"932372\",\n        \"device\" : \"devtmpfs\",\n        \"used\" : \"0\",\n        \"percent\" : \"0%\",\n        \"size\" : \"932372\",\n        \"type\" : \"devtmpfs\",\n        \"mountpoint\" : \"/dev\"\n      },\n      {\n        \"available\" : \"942088\",\n        \"device\" : \"tmpfs\",\n        \"used\" : \"0\",\n        \"percent\" : \"0%\",\n        \"size\" : \"942088\",\n        \"type\" : \"tmpfs\",\n        \"mountpoint\" : \"/dev/shm\"\n      },\n      {\n        \"available\" : \"933636\",\n        \"device\" : \"tmpfs\",\n        \"used\" : \"8452\",\n        \"percent\" : \"1%\",\n        \"size\" : \"942088\",\n        \"type\" : \"tmpfs\",\n        \"mountpoint\" : \"/run\"\n      },\n      {\n        \"available\" : \"341996\",\n        \"device\" : \"/dev/sda1\",\n        \"used\" : \"166592\",\n        \"percent\" : \"33%\",\n        \"size\" : \"508588\",\n        \"type\" : \"xfs\",\n        \"mountpoint\" : \"/boot\"\n      },\n      {\n        \"available\" : \"75495164\",\n        \"device\" : \"vagrant\",\n        \"used\" : \"168429828\",\n        \"percent\" : \"70%\",\n        \"size\" : \"243924992\",\n        \"type\" : \"vboxsf\",\n        \"mountpoint\" : \"/vagrant\"\n      },\n      {\n        \"available\" : \"75495164\",\n        \"device\" : \"vagrant_data\",\n        \"used\" : \"168429828\",\n        \"percent\" : \"70%\",\n        \"size\" : \"243924992\",\n        \"type\" : \"vboxsf\",\n        \"mountpoint\" : \"/vagrant_data\"\n      }\n    ],\n    \"host_health_report\" : \"\",\n    \"host_name\" : \"slave2.mycluster\",\n    \"host_state\" : \"HEALTHY\",\n    \"host_status\" : \"UNHEALTHY\",\n    \"ip\" : \"192.168.0.22\",\n    \"last_agent_env\" : {\n      \"stackFoldersAndFiles\" : [ ],\n      \"alternatives\" : [ ],\n      \"existingUsers\" : [ ],\n      \"existingRepos\" : [ ],\n      \"installedPackages\" : [ ],\n      \"hostHealth\" : {\n        \"activeJavaProcs\" : [ ],\n        \"agentTimeStampAtReporting\" : 1481214773737,\n        \"serverTimeStampAtReporting\" : 1481214771036,\n        \"liveServices\" : [\n          {\n            \"desc\" : \"● ntpd.service - Network Time Service\\n   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)\\n   Active: inactive (dead)\\n\\nDec 07 20:16:57 slave2.mycluster systemd[1]: Stopped Network Time Service.\\n\",\n            \"status\" : \"Unhealthy\",\n            \"name\" : \"ntpd\"\n          }\n        ]\n      },\n      \"umask\" : 18,\n      \"transparentHugePage\" : \"\",\n      \"firewallRunning\" : false,\n      \"firewallName\" : \"iptables\",\n      \"reverseLookup\" : true\n    },\n    \"last_heartbeat_time\" : 1481214800798,\n    \"last_registration_time\" : 1481160539338,\n    \"maintenance_state\" : \"OFF\",\n    \"os_arch\" : \"x86_64\",\n    \"os_family\" : \"redhat7\",\n    \"os_type\" : \"centos7\",\n    \"ph_cpu_count\" : 1,\n    \"public_host_name\" : \"slave2.mycluster\",\n    \"rack_info\" : \"/default-rack\",\n    
\"recovery_report\" : {\n      \"summary\" : \"RECOVERABLE\",\n      \"component_reports\" : [ ]\n    },\n    \"recovery_summary\" : \"RECOVERABLE\",\n    \"total_mem\" : 1884176,\n    \"desired_configs\" : {\n      \"ams-env\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ams-grafana-env\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ams-grafana-ini\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ams-hbase-env\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ams-hbase-log4j\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ams-hbase-policy\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ams-hbase-security-site\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ams-hbase-site\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ams-log4j\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ams-site\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ams-ssl-client\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ams-ssl-server\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"capacity-scheduler\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"cluster-env\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"core-site\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"falcon-env\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"falcon-runtime.properties\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"falcon-startup.properties\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"hadoop-env\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"hadoop-policy\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"hbase-env\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"hbase-log4j\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"hbase-policy\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"hbase-site\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"hcat-env\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"hdfs-log4j\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"hdfs-site\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"hive-env\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"hive-exec-log4j\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"hive-log4j\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"hive-site\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"hiveserver2-site\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"kafka-broker\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"kafka-env\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"kafka-log4j\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"mapred-env\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"mapred-site\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"oozie-env\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"oozie-log4j\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"oozie-site\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      
\"pig-env\" : {\n        \"default\" : \"INITIAL\"\n      },\n      \"pig-log4j\" : {\n        \"default\" : \"INITIAL\"\n      },\n      \"pig-properties\" : {\n        \"default\" : \"INITIAL\"\n      },\n      \"ranger-hbase-audit\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ranger-hbase-plugin-properties\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ranger-hbase-policymgr-ssl\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ranger-hbase-security\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ranger-hdfs-audit\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ranger-hdfs-plugin-properties\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ranger-hdfs-policymgr-ssl\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ranger-hdfs-security\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ranger-hive-audit\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ranger-hive-plugin-properties\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ranger-hive-policymgr-ssl\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ranger-hive-security\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ranger-kafka-audit\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ranger-kafka-plugin-properties\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ranger-kafka-policymgr-ssl\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ranger-kafka-security\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ranger-yarn-audit\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ranger-yarn-plugin-properties\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ranger-yarn-policymgr-ssl\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ranger-yarn-security\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"slider-client\" : {\n        \"default\" : \"INITIAL\"\n      },\n      \"slider-env\" : {\n        \"default\" : \"INITIAL\"\n      },\n      \"slider-log4j\" : {\n        \"default\" : \"INITIAL\"\n      },\n      \"spark-defaults\" : {\n        \"default\" : \"INITIAL\"\n      },\n      \"spark-env\" : {\n        \"default\" : \"INITIAL\"\n      },\n      \"spark-hive-site-override\" : {\n        \"default\" : \"INITIAL\"\n      },\n      \"spark-log4j-properties\" : {\n        \"default\" : \"INITIAL\"\n      },\n      \"spark-metrics-properties\" : {\n        \"default\" : \"INITIAL\"\n      },\n      \"spark-thrift-fairscheduler\" : {\n        \"default\" : \"INITIAL\"\n      },\n      \"spark-thrift-sparkconf\" : {\n        \"default\" : \"INITIAL\"\n      },\n      \"ssl-client\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"ssl-server\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"tez-env\" : {\n        \"default\" : \"INITIAL\"\n      },\n      \"tez-site\" : {\n        \"default\" : \"INITIAL\"\n      },\n      \"webhcat-env\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"webhcat-log4j\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"webhcat-site\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"yarn-env\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"yarn-log4j\" : {\n        \"default\" : 
\"TOPOLOGY_RESOLVED\"\n      },\n      \"yarn-site\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"zoo.cfg\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"zookeeper-env\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      },\n      \"zookeeper-log4j\" : {\n        \"default\" : \"TOPOLOGY_RESOLVED\"\n      }\n    }\n  },\n  \"alerts_summary\" : {\n    \"CRITICAL\" : 3,\n    \"MAINTENANCE\" : 0,\n    \"OK\" : 2,\n    \"UNKNOWN\" : 0,\n    \"WARNING\" : 0\n  },\n  \"kerberos_identities\" : [\n    {\n      \"href\" : \"http://192.168.0.11:8080/api/v1/clusters/pstl-cluster/hosts/slave2.mycluster/kerberos_identities/HTTP%2Fslave2.mycluster%40%24%7Bkerberos-env%2Frealm%7D\",\n      \"KerberosIdentity\" : {\n        \"cluster_name\" : \"pstl-cluster\",\n        \"host_name\" : \"slave2.mycluster\",\n        \"principal_name\" : \"HTTP/slave2.mycluster@${kerberos-env/realm}\"\n      }\n    },\n    {\n      \"href\" : \"http://192.168.0.11:8080/api/v1/clusters/pstl-cluster/hosts/slave2.mycluster/kerberos_identities/ambari-qa-pstl-cluster%40%24%7Bkerberos-env%2Frealm%7D\",\n      \"KerberosIdentity\" : {\n        \"cluster_name\" : \"pstl-cluster\",\n        \"host_name\" : \"slave2.mycluster\",\n        \"principal_name\" : \"ambari-qa-pstl-cluster@${kerberos-env/realm}\"\n      }\n    },\n    {\n      \"href\" : \"http://192.168.0.11:8080/api/v1/clusters/pstl-cluster/hosts/slave2.mycluster/kerberos_identities/kafka%2Fslave2.mycluster%40%24%7Bkerberos-env%2Frealm%7D\",\n      \"KerberosIdentity\" : {\n        \"cluster_name\" : \"pstl-cluster\",\n        \"host_name\" : \"slave2.mycluster\",\n        \"principal_name\" : \"kafka/slave2.mycluster@${kerberos-env/realm}\"\n      }\n    },\n    {\n      \"href\" : \"http://192.168.0.11:8080/api/v1/clusters/pstl-cluster/hosts/slave2.mycluster/kerberos_identities/zookeeper%2Fslave2.mycluster%40%24%7Bkerberos-env%2Frealm%7D\",\n      \"KerberosIdentity\" : {\n        \"cluster_name\" : \"pstl-cluster\",\n        \"host_name\" : \"slave2.mycluster\",\n        \"principal_name\" : \"zookeeper/slave2.mycluster@${kerberos-env/realm}\"\n      }\n    }\n  ],\n  \"alerts\" : [\n    {\n      \"href\" : \"http://192.168.0.11:8080/api/v1/clusters/pstl-cluster/hosts/slave2.mycluster/alerts/1\",\n      \"Alert\" : {\n        \"cluster_name\" : \"pstl-cluster\",\n        \"definition_id\" : 70,\n        \"definition_name\" : \"ambari_agent_disk_usage\",\n        \"host_name\" : \"slave2.mycluster\",\n        \"id\" : 1,\n        \"service_name\" : \"AMBARI\"\n      }\n    },\n    {\n      \"href\" : \"http://192.168.0.11:8080/api/v1/clusters/pstl-cluster/hosts/slave2.mycluster/alerts/6\",\n      \"Alert\" : {\n        \"cluster_name\" : \"pstl-cluster\",\n        \"definition_id\" : 71,\n        \"definition_name\" : \"ambari_server_agent_heartbeat\",\n        \"host_name\" : \"slave2.mycluster\",\n        \"id\" : 6,\n        \"service_name\" : \"AMBARI\"\n      }\n    },\n    {\n      \"href\" : \"http://192.168.0.11:8080/api/v1/clusters/pstl-cluster/hosts/slave2.mycluster/alerts/8\",\n      \"Alert\" : {\n        \"cluster_name\" : \"pstl-cluster\",\n        \"definition_id\" : 52,\n        \"definition_name\" : \"ams_metrics_monitor_process\",\n        \"host_name\" : \"slave2.mycluster\",\n        \"id\" : 8,\n        \"service_name\" : \"AMBARI_METRICS\"\n      }\n    },\n    {\n      \"href\" : \"http://192.168.0.11:8080/api/v1/clusters/pstl-cluster/hosts/slave2.mycluster/alerts/9\",\n      \"Alert\" : {\n        
\"cluster_name\" : \"pstl-cluster\",\n        \"definition_id\" : 63,\n        \"definition_name\" : \"kafka_broker_process\",\n        \"host_name\" : \"slave2.mycluster\",\n        \"id\" : 9,\n        \"service_name\" : \"KAFKA\"\n      }\n    },\n    {\n      \"href\" : \"http://192.168.0.11:8080/api/v1/clusters/pstl-cluster/hosts/slave2.mycluster/alerts/7\",\n      \"Alert\" : {\n        \"cluster_name\" : \"pstl-cluster\",\n        \"definition_id\" : 69,\n        \"definition_name\" : \"zookeeper_server_process\",\n        \"host_name\" : \"slave2.mycluster\",\n        \"id\" : 7,\n        \"service_name\" : \"ZOOKEEPER\"\n      }\n    }\n  ],\n  \"stack_versions\" : [\n    {\n      \"href\" : \"http://192.168.0.11:8080/api/v1/clusters/pstl-cluster/hosts/slave2.mycluster/stack_versions/1\",\n      \"HostStackVersions\" : {\n        \"cluster_name\" : \"pstl-cluster\",\n        \"host_name\" : \"slave2.mycluster\",\n        \"id\" : 1,\n        \"repository_version\" : 1,\n        \"stack\" : \"HDP\",\n        \"version\" : \"2.4\"\n      }\n    }\n  ],\n  \"host_components\" : [\n    {\n      \"href\" : \"http://192.168.0.11:8080/api/v1/clusters/pstl-cluster/hosts/slave2.mycluster/host_components/KAFKA_BROKER\",\n      \"HostRoles\" : {\n        \"cluster_name\" : \"pstl-cluster\",\n        \"component_name\" : \"KAFKA_BROKER\",\n        \"host_name\" : \"slave2.mycluster\"\n      }\n    },\n    {\n      \"href\" : \"http://192.168.0.11:8080/api/v1/clusters/pstl-cluster/hosts/slave2.mycluster/host_components/METRICS_MONITOR\",\n      \"HostRoles\" : {\n        \"cluster_name\" : \"pstl-cluster\",\n        \"component_name\" : \"METRICS_MONITOR\",\n        \"host_name\" : \"slave2.mycluster\"\n      }\n    },\n    {\n      \"href\" : \"http://192.168.0.11:8080/api/v1/clusters/pstl-cluster/hosts/slave2.mycluster/host_components/ZOOKEEPER_SERVER\",\n      \"HostRoles\" : {\n        \"cluster_name\" : \"pstl-cluster\",\n        \"component_name\" : \"ZOOKEEPER_SERVER\",\n        \"host_name\" : \"slave2.mycluster\"\n      }\n    }\n  ]\n}",
    "content_type": "text/plain",
    "expires": "Thu, 01 Jan 1970 00:00:00 GMT",
    "invocation": {
        "module_args": {
            "HEADER_X-Requested-By": "ambari",
            "backup": null,
            "body": null,
            "body_format": "raw",
            "content": null,
            "creates": null,
            "delimiter": null,
            "dest": null,
            "directory_mode": null,
            "follow": false,
            "follow_redirects": "safe",
            "force": false,
            "force_basic_auth": true,
            "group": null,
            "headers": {
                "Authorization": "Basic YWRtaW46YWRtaW4=",
                "X-Requested-By": "ambari"
            },
            "http_agent": "ansible-httpget",
            "method": "GET",
            "mode": null,
            "owner": null,
            "password": "admin",
            "regexp": null,
            "remote_src": null,
            "removes": null,
            "return_content": true,
            "selevel": null,
            "serole": null,
            "setype": null,
            "seuser": null,
            "src": null,
            "status_code": [
                "200",
                "201",
                "202",
                "404"
            ],
            "timeout": 30,
            "unsafe_writes": null,
            "url": "<link_to_url>",
            "url_password": "admin",
            "url_username": "admin",
            "use_proxy": true,
            "user": "admin",
            "validate_certs": true
        },
        "module_name": "uri"
    },
    "item": "slave2",
    "msg": "OK (unknown bytes)",
    "redirected": false,
    "server": "Jetty(8.1.17.v20150415)",
    "set_cookie": "AMBARISESSIONID=1izzm1ej0m6baujfzms48zhau;Path=/;HttpOnly",
    "status": 200,
    "url": "<link_to_uri>",
    "user": "admin",
    "vary": "Accept-Encoding, User-Agent",
    "x_frame_options": "DENY",
    "x_xss_protection": "1; mode=block"
}

Answer

Try:

- debug: msg="{{ item.host_name }} {{ item.component_name }}"
  with_items: "{{ list_of_components.results | map(attribute='content') | map('from_json') | map(attribute='host_components') | sum(start=[]) | map(attribute='HostRoles') | list }}"

The code is untested!

The idea is: take list_of_components.results, take only the content out of each result, apply the from_json filter, take only the host_components list, flatten the per-host component lists into a single list with sum, take the HostRoles out of each element, and cast the map result to a list.

So this code should give you a list of items like this:

{
    "cluster_name" : "pstl-cluster",
    "component_name" : "METRICS_MONITOR",
    "host_name" : "slave2.mycluster"
}
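
If the goal is to loop over those (host, component) pairs in a later task, one option (untested, and not part of the original answer) is to store the flattened list in a fact first; the fact name host_component_pairs and the task names are my own. A second variant adapts the json_query attempt that was commented out in the question; the json_query filter needs the jmespath Python library on the control machine.

# Untested sketch: keep the flattened HostRoles list in a fact so later tasks can reuse it.
- name: Collect (host, component) pairs
  set_fact:
    host_component_pairs: "{{ list_of_components.results
                              | map(attribute='content') | map('from_json')
                              | map(attribute='host_components') | sum(start=[])
                              | map(attribute='HostRoles') | list }}"

- name: Loop over the pairs in the next task
  debug:
    msg: "{{ item.host_name }} runs {{ item.component_name }}"
  with_items: "{{ host_component_pairs }}"

# Alternative: query each per-host response directly (requires jmespath).
- name: Show component names per host
  debug:
    msg: "{{ item.item }} runs {{ (item.content | from_json) | json_query('host_components[*].HostRoles.component_name') }}"
  with_items: "{{ list_of_components.results }}"

Either loop yields the same host_name and component_name information; the set_fact route just makes it reusable across later tasks.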

