Commit 9ecb25f7 authored by Spiros Koulouzis's avatar Spiros Koulouzis

Added one more check to produce the plan and docker-compose

parent 74c03bd5
...@@ -72,7 +72,7 @@
 <h1 class="page-header">Files and Libraries</h1>
 <h3 id="artifact_gwt_json_overlay">GWT JSON Overlay</h3>
-<p class="lead">Created December 20, 2017</p>
+<p class="lead">Created December 22, 2017</p>
 <p> <p>
 The <a href="http://code.google.com/webtoolkit/">Google Web Toolkit</a> JSON Overlay library provides the JSON Overlays that
 can be used to access the Web service API for this application.
...@@ -97,7 +97,7 @@
 </tbody>
 </table>
 <h3 id="artifact_java_json_client_library">Java JSON Client Library</h3>
-<p class="lead">Created December 20, 2017</p>
+<p class="lead">Created December 22, 2017</p>
 <p><p>
 The Java client-side library is used to provide the set of Java objects that can be serialized
 to/from JSON using <a href="http://jackson.codehaus.org/">Jackson</a>. This is useful for accessing the
...@@ -127,7 +127,7 @@
 </tbody>
 </table>
 <h3 id="artifact_java_xml_client_library">Java XML Client Library</h3>
-<p class="lead">Created December 20, 2017</p>
+<p class="lead">Created December 22, 2017</p>
 <p><p>
 The Java client-side library is used to access the Web service API for this application using Java.
 </p>
...@@ -155,7 +155,7 @@
 </tbody>
 </table>
 <h3 id="artifact_js_client_library">JavaScript Client Library</h3>
-<p class="lead">Created December 20, 2017</p>
+<p class="lead">Created December 22, 2017</p>
 <p><p>
 The JavaScript client-side library defines classes that can be (de)serialized to/from JSON.
 This is useful for accessing the resources that are published by this application, but only
...@@ -190,7 +190,7 @@
 </tbody>
 </table>
 <h3 id="artifact_php_json_client_library">PHP JSON Client Library</h3>
-<p class="lead">Created December 20, 2017</p>
+<p class="lead">Created December 22, 2017</p>
 <p><p>
 The PHP JSON client-side library defines the PHP classes that can be (de)serialized to/from JSON.
 This is useful for accessing the resources that are published by this application, but only
...@@ -219,7 +219,7 @@
 </tbody>
 </table>
 <h3 id="artifact_php_xml_client_library">PHP XML Client Library</h3>
-<p class="lead">Created December 20, 2017</p>
+<p class="lead">Created December 22, 2017</p>
 <p><p>
 The PHP client-side library defines the PHP classes that can be (de)serialized to/from XML.
 This is useful for accessing the resources that are published by this application, but only
...@@ -251,7 +251,7 @@
 </tbody>
 </table>
 <h3 id="artifact_ruby_json_client_library">Ruby JSON Client Library</h3>
-<p class="lead">Created December 20, 2017</p>
+<p class="lead">Created December 22, 2017</p>
 <p><p>
 The Ruby JSON client-side library defines the Ruby classes that can be (de)serialized to/from JSON.
 This is useful for accessing the REST endpoints that are published by this application, but only
......
...@@ -56,6 +56,7 @@ if not getattr(logger, 'handler_set', None):
 retry=0
 retry_count = 20
+tasks_done = {}
 #cwd = os.getcwd()
 falied_playbook_path='/tmp/falied_playbook.yml'
...@@ -65,7 +66,7 @@ def install_prerequisites(vm,return_dict):
 logger.info("Installing ansible prerequisites on: "+vm.ip)
 ssh = paramiko.SSHClient()
 ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
-ssh.connect(vm.ip, username=vm.user, key_filename=vm.key)
+ssh.connect(vm.ip, username=vm.user, key_filename=vm.key, timeout=5)
 sftp = ssh.open_sftp()
 file_path = os.path.dirname(os.path.abspath(__file__))
 sftp.chdir('/tmp/')
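The change above adds `timeout=5` to each `ssh.connect(...)` call so a dead VM no longer blocks the deployer indefinitely; the module's own `retry_count` then governs how often to try again. A minimal sketch of how a connect timeout composes with bounded retries (names and the `connect` callable are illustrative, not the module's exact API):

```python
import time

def connect_with_retries(connect, retries=20, delay=0.0):
    """Call `connect` until it succeeds or the retry budget is spent.

    `connect` stands in for e.g. paramiko's SSHClient.connect(..., timeout=5);
    any exception (including socket.timeout) triggers another attempt.
    """
    last_error = None
    for attempt in range(1, retries + 1):
        try:
            return connect()
        except Exception as exc:
            last_error = exc
            time.sleep(delay)  # back-off between attempts (0 here for brevity)
    raise last_error

# Example: a callable that times out twice before succeeding.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError("connection timed out")
    return "connected"
```

With this shape, `connect_with_retries(flaky)` succeeds on the third attempt instead of hanging on the first.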
...@@ -126,26 +127,29 @@ def create_faied_playbooks(failed_tasks,playbook_path):
         hosts += host+","
         if task_name == 'setup':
             found_first_failed_task = True
         else:
             found_first_failed_task = False
+            logger.info("First failed task: '"+task_name+"'. Host: "+host)
         for play in plays:
             for task in play['tasks']:
                 if found_first_failed_task:
                     retry_task.append(task)
                 else:
+                    if task_name in tasks_done:
+                        host_done = tasks_done[task_name]
+                        if host_done == host or host_done == 'all':
+                            logger.info("Task: '"+task_name+"' on host: "+host+" already done. Skipping.")
+                            continue
                     if 'name' in task and task['name'] == task_name:
                         retry_task.append(task)
-                        logger.info("First faield task: \'"+task_name+ "\'. Host: "+ host)
                         found_first_failed_task = True
                     elif task_name in task:
                         retry_task.append(task)
-                        logger.info("First faield task: \'"+task_name+ "\'. Host: "+ host)
                         found_first_failed_task = True
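The new branch above consults the module-level `tasks_done` dict so that a retry playbook does not repeat a task that already reported `ok` on the same host (or on `'all'` hosts). The filtering idea can be sketched as a pure function (a sketch under stated assumptions; `skip_completed` and its task-dict shape are illustrative, not the module's API):

```python
def skip_completed(tasks, task_name, tasks_done, host):
    """Drop `task_name` from an ordered task list when tasks_done records
    it as already completed on this host (or on all hosts)."""
    done_host = tasks_done.get(task_name)
    if done_host in (host, 'all'):
        return [t for t in tasks if t.get('name') != task_name]
    return tasks

# Sample play: three named tasks, one of which already succeeded on this host.
tasks = [{'name': 'setup'}, {'name': 'install docker'}, {'name': 'start swarm'}]
```

Here `skip_completed(tasks, 'install docker', {'install docker': '10.0.0.1'}, '10.0.0.1')` keeps only the tasks that still need to run.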
...@@ -200,6 +204,8 @@ def execute_playbook(hosts, playbook_path,user,ssh_key_file,extra_vars,passwords
     #failed_tasks.append(res)
     resp = json.dumps({"host":res['ip'], "result":res['result']._result,"task":res['task']})
     logger.info("Task: "+res['task'] + ". Host: "+ res['ip'] +". State: ok")
+    global tasks_done
+    tasks_done[res['task']] = res['ip']
     answer.append({"host":res['ip'], "result":res['result']._result,"task":res['task']})
...@@ -215,6 +221,7 @@ def execute_playbook(hosts, playbook_path,user,ssh_key_file,extra_vars,passwords
     #failed_tasks.append(res['result'])
     resp = json.dumps({"host":res['ip'], "result":res['result']._result, "task":res['task']})
     logger.error("Task: "+res['task'] + ". Host: "+ res['ip'] +". State: host_failed")
+    logger.error(resp)
     answer.append({"host":res['ip'], "result":res['result']._result,"task":res['task']})
     return answer,failed_tasks
...@@ -226,11 +233,14 @@ def run(vm_list,playbook_path,rabbitmq_host,owner):
     ssh_key_file=""
     rabbit = DRIPLoggingHandler(host=rabbitmq_host, port=5672,user=owner)
     logger.addHandler(rabbit)
+    logger.info("DRIPLogging host: '"+str(rabbitmq_host)+"'. Logging message owner: '"+owner+"'")
     manager = multiprocessing.Manager()
     return_dict = manager.dict()
     jobs = []
     if os.path.exists(falied_playbook_path):
         os.remove(falied_playbook_path)
...@@ -251,7 +261,6 @@ def run(vm_list,playbook_path,rabbitmq_host,owner):
     passwords = {}
     logger.info("Executing playbook: " + (playbook_path))
     answer,failed_tasks = execute_playbook(hosts,playbook_path,user,ssh_key_file,extra_vars,passwords)
     failed_playsbooks = []
...@@ -264,15 +273,20 @@ def run(vm_list,playbook_path,rabbitmq_host,owner):
     task_name = str(failed_task._task.get_name())
     retry_setup = 0
-    while task_name == 'setup' and retry_setup < retry_count :
+    while task_name and task_name == 'setup' and retry_setup < retry_count :
         retry_setup+=1
         answer,failed_tasks = execute_playbook(hosts,playbook_path,user,ssh_key_file,extra_vars,passwords)
+        if failed_tasks:
             failed = failed_tasks[0]
             failed_task = failed['task']
             if isinstance(failed_task, ansible.parsing.yaml.objects.AnsibleUnicode) or isinstance(failed_task, unicode) or isinstance(failed_task,str):
                 task_name = str(failed_task)
             else:
                 task_name = str(failed_task._task.get_name())
+        else:
+            task_name = None
     while not failed_playsbooks:
         failed_playsbooks = create_faied_playbooks(failed_tasks,playbook_path)
...@@ -280,15 +294,22 @@ def run(vm_list,playbook_path,rabbitmq_host,owner):
     for failed_playbook in failed_playsbooks:
         hosts = failed_playbook[0]['hosts']
+        logger.info("Writing new playbook at: '"+falied_playbook_path+"'")
         with open(falied_playbook_path, 'w') as outfile:
             yaml.dump(failed_playbook, outfile)
         retry_failed_tasks = 0
-        while retry_failed_tasks < retry_count and failed_tasks:
+        failed_tasks = None
+        done = False
+        while not done:
             logger.info("Executing playbook: " + (falied_playbook_path) +" on host: "+hosts+". Retries: "+str(retry_failed_tasks))
             answer,failed_tasks = execute_playbook(hosts,falied_playbook_path,user,ssh_key_file,extra_vars,passwords)
             retry_failed_tasks+=1
+            if retry_failed_tasks > retry_count or not failed_tasks:
                 retry_failed_tasks = 0
+                done = True
+                break
     if os.path.exists(falied_playbook_path):
         os.remove(falied_playbook_path)
......
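The reworked loop above replaces the `while retry_failed_tasks < retry_count and failed_tasks:` condition with an explicit `done` flag: each failed playbook is re-executed at least once, and the loop stops when either the retry budget is exhausted or no tasks fail. The control flow can be sketched in isolation (a sketch; `execute` stands in for `execute_playbook(...)` and simply returns the failed-task list):

```python
def retry_playbook(execute, retry_count=20):
    """Run `execute` until it reports no failures or the budget is spent,
    mirroring the `done` flag introduced in the diff."""
    retries = 0
    done = False
    failed = None
    while not done:
        failed = execute()   # returns the list of still-failing tasks
        retries += 1
        if retries > retry_count or not failed:
            done = True
    return failed

# Example: a playbook that fails twice, then succeeds on the third run.
runs = iter([['task A failed'], ['task A failed'], []])
```

`retry_playbook(lambda: next(runs))` here returns an empty failure list after three executions.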
...@@ -51,11 +51,11 @@ def docker_check(vm, compose_name):
 paramiko.util.log_to_file("deployment.log")
 ssh = paramiko.SSHClient()
 ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
-ssh.connect(vm.ip, username=vm.user, key_filename=vm.key)
+ssh.connect(vm.ip, username=vm.user, key_filename=vm.key, timeout=5)
 node_format = '\'{\"ID\":\"{{.ID}}\",\"hostname\":\"{{.Hostname}}\",\"status\":\"{{.Status}}\",\"availability\":\"{{.Availability}}\",\"status\":\"{{.Status}}\"}\''
 cmd = 'sudo docker node ls --format ' + (node_format)
+logger.info("Sending :"+cmd)
 json_response = {}
 cluster_node_info = []
 stdin, stdout, stderr = ssh.exec_command(cmd)
...@@ -73,7 +73,7 @@ def docker_check(vm, compose_name):
 services_format = '\'{\"ID\":\"{{.ID}}\",\"name\":\"{{.Name}}\",\"image\":\"{{.Image}}\",\"node\":\"{{.Node}}\",\"desired_state\":\"{{.DesiredState}}\",\"current_state\":\"{{.CurrentState}}\",\"error\":\"{{.Error}}\",\"ports\":\"{{.Ports}}\"}\''
 cmd = 'sudo docker stack ps '+ compose_name +' --format ' + services_format
-logger.info("Got response running \"docker stack ps\"")
+logger.info("Sending :"+cmd)
 stdin, stdout, stderr = ssh.exec_command(cmd)
 stack_ps_resp = stdout.readlines()
 services_info = []
...@@ -99,6 +99,7 @@ def docker_check(vm, compose_name):
 stack_format = '\'{"ID":"{{.ID}}","name":"{{.Name}}","mode":"{{.Mode}}","replicas":"{{.Replicas}}","image":"{{.Image}}"}\''
 cmd = 'sudo docker stack services '+ compose_name +' --format ' + (stack_format)
+logger.info("Sending :"+cmd)
 stdin, stdout, stderr = ssh.exec_command(cmd)
 logger.info("Got response running \"docker stack services\"")
 stack_resp = stdout.readlines()
...@@ -116,8 +117,8 @@ def docker_check(vm, compose_name):
 cmd = 'sudo docker node inspect '
 for hostname in nodes_hostname:
     cmd += ' '+hostname
+logger.info("Sending :"+cmd)
 stdin, stdout, stderr = ssh.exec_command(cmd)
-logger.info("Got response running \"docker node inspect\"")
 inspect_resp = stdout.readlines()
 response_str = ""
...@@ -135,6 +136,7 @@ def docker_check(vm, compose_name):
 for id in services_ids:
     cmd += ' '+id
+logger.info("Sending :"+cmd)
 stdin, stdout, stderr = ssh.exec_command(cmd)
 logger.info("Got response running \"docker inspect\"")
 inspect_resp = stdout.readlines()
......
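`docker_check` relies on docker's `--format` flag to print one JSON object per line (e.g. `docker node ls --format '{"ID":"{{.ID}}",...}'`), so each line read from `stdout.readlines()` can be decoded with the standard `json` module. A minimal sketch of that parsing step (the sample lines are illustrative output, not captured from a real swarm):

```python
import json

def parse_format_lines(lines):
    """Decode the one-JSON-object-per-line output produced by
    `docker ... --format` into a list of dicts, skipping blank lines."""
    return [json.loads(line) for line in lines if line.strip()]

# Illustrative stdout of `sudo docker node ls --format ...` over SSH.
sample = [
    '{"ID":"abc123","hostname":"node1","status":"Ready","availability":"Active"}\n',
    '{"ID":"def456","hostname":"node2","status":"Down","availability":"Drain"}\n',
]
```

This is why the format strings above quote every field: unescaped values would otherwise break `json.loads`.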
...@@ -41,7 +41,7 @@ def deploy_compose(vm, compose_file, compose_name,docker_login):
 paramiko.util.log_to_file("deployment.log")
 ssh = paramiko.SSHClient()
 ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
-ssh.connect(vm.ip, username=vm.user, key_filename=vm.key)
+ssh.connect(vm.ip, username=vm.user, key_filename=vm.key, timeout=5)
 sftp = ssh.open_sftp()
 sftp.chdir('/tmp/')
 sftp.put(compose_file, "docker-compose.yml")
...@@ -56,10 +56,13 @@ def deploy_compose(vm, compose_file, compose_name,docker_login):
 #stdin, stdout, stderr = ssh.exec_command("sudo docker stack rm %s" % (compose_name))
 #stdout.read()
 #err = stderr.read()
-stdin, stdout, stderr = ssh.exec_command("sudo docker stack deploy --with-registry-auth --compose-file /tmp/docker-compose.yml %s" % (compose_name))
-stdout.read()
+cmd = "sudo docker stack deploy --with-registry-auth --compose-file /tmp/docker-compose.yml "+compose_name
+logger.info("Sending : "+cmd)
+stdin, stdout, stderr = ssh.exec_command(cmd)
+out = stdout.read()
 err = stderr.read()
 logger.info("stderr from: "+vm.ip + " "+ err)
+logger.info("stdout from: "+vm.ip + " "+ out)
 logger.info("Finished docker compose deployment on: "+vm.ip)
 except Exception as e:
     global retry
......
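The `deploy_compose` change keeps `stdout` instead of discarding it, so both streams of `docker stack deploy` get logged. The same capture-both-streams pattern, shown locally with `subprocess` rather than paramiko (a sketch; the echo command merely stands in for the deploy command):

```python
import subprocess

def run_and_log(cmd):
    """Run a shell command and return (stdout, stderr), mirroring how the
    diff now keeps `out = stdout.read()` alongside `err = stderr.read()`
    so that deploy warnings and errors are both visible in the logs."""
    proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return proc.stdout, proc.stderr
```

Logging only stderr, as the old code did, hides `docker stack deploy`'s service-creation messages, which are printed on stdout.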
...@@ -41,7 +41,7 @@ def install_engine(vm,return_dict):
 paramiko.util.log_to_file("deployment.log")
 ssh = paramiko.SSHClient()
 ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
-ssh.connect(vm.ip, username=vm.user, key_filename=vm.key)
+ssh.connect(vm.ip, username=vm.user, key_filename=vm.key, timeout=5)
 stdin, stdout, stderr = ssh.exec_command("sudo dpkg --get-selections | grep docker")
 temp_list = stdout.readlines()
 temp_str = ""
......
...@@ -43,7 +43,7 @@ def install_manager(vm):
 paramiko.util.log_to_file("deployment.log")
 ssh = paramiko.SSHClient()
 ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
-ssh.connect(vm.ip, username=vm.user, key_filename=vm.key)
+ssh.connect(vm.ip, username=vm.user, key_filename=vm.key, timeout=5)
 stdin, stdout, stderr = ssh.exec_command("sudo docker info | grep 'Swarm'")
 temp_list1 = stdout.readlines()
 stdin, stdout, stderr = ssh.exec_command("sudo docker info | grep 'Is Manager'")
...@@ -79,7 +79,7 @@ def install_worker(join_cmd, vm,return_dict):
 paramiko.util.log_to_file("deployment.log")
 ssh = paramiko.SSHClient()
 ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
-ssh.connect(vm.ip, username=vm.user, key_filename=vm.key)
+ssh.connect(vm.ip, username=vm.user, key_filename=vm.key, timeout=5)
 stdin, stdout, stderr = ssh.exec_command("sudo docker info | grep 'Swarm'")
 temp_list1 = stdout.readlines()
 if temp_list1[0].find("Swarm: active") != -1:
......
...@@ -41,10 +41,10 @@ class DRIPLoggingHandler(RabbitMQHandler):
 if not self.connection or self.connection.is_closed or not self.channel or self.channel.is_closed:
     self.open_connection()
+queue = 'log_qeue_' + self.user
 self.channel.basic_publish(
     exchange='',
-    routing_key='log_qeue_user',
+    routing_key=queue,
     body=self.format(record),
     properties=pika.BasicProperties(
         delivery_mode=2)
......
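The handler change replaces the fixed routing key `'log_qeue_user'` with a queue name derived from the owning user, so each user's log consumer reads its own queue (the existing `log_qeue_` spelling is preserved for compatibility with current consumers). A sketch of the naming rule, with the `pika` publish call noted in comments since it needs a live broker:

```python
def user_log_queue(user):
    """Derive the per-user log queue name used as the routing key in
    DRIPLoggingHandler.  Keeps the legacy 'log_qeue_' prefix on purpose."""
    return 'log_qeue_' + user

# With a connected pika channel, the handler then publishes to the default
# exchange with delivery_mode=2 so messages survive a broker restart:
#   channel.basic_publish(exchange='', routing_key=user_log_queue(owner),
#                         body=payload,
#                         properties=pika.BasicProperties(delivery_mode=2))
```

Note the default exchange routes by queue name, so the queue must have been declared with exactly this name for delivery to succeed.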
---
publicKeyPath: "name@id_rsa.pub"
userName: "vm_user"
topologies:
- topology: "egi-level-1"
cloudProvider: "EGI"
domain: "CESNET"
status: "fresh"
tag: "fixed"
statusInfo: null
copyOf: null
sshKeyPairId: null
connections: null
---
subnets: null
components:
- name: "nodeA"
type: "switch/compute"
nodeType: "medium"
OStype: "Ubuntu 16.04"
script: null
role: master
dockers: null
publicAddress: null
ethernetPort: null
VMResourceID: null
...@@ -25,13 +25,30 @@ from os.path import expanduser
 home = expanduser("~")
 playbook_path=home+"/Downloads/playbook.yml"
+playbook_path=sys.argv[1] #home+"/Downloads/playbook.yml"
-ip = "147.228.242.81"
 ip = "147.228.242.97"
+ip = sys.argv[2] #"147.228.242.97"
 user="vm_user"
 role = "master"
 ssh_key_file=home+"/Downloads/id_rsa"
+ssh_key_file = sys.argv[3] #home+"/Downloads/id_rsa"
 vm_list = set()
 vm = VmInfo(ip, user, ssh_key_file, role)
 vm_list.add(vm)
-ret = ansible_playbook.run(vm_list,playbook_path,"localhost","owner")
+rabbit_mq_host = sys.argv[4] #rabbit_mq_host
+print sys.argv
+print "playbook_path: "+playbook_path
+print "ip: "+ip
+print "ssh_key_file: "+ssh_key_file
+print "rabbit_mq_host: "+rabbit_mq_host
+ret = ansible_playbook.run(vm_list,playbook_path,rabbit_mq_host,"owner")
\ No newline at end of file
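The test driver now takes its inputs from `sys.argv` instead of hard-coded values: positional arguments for the playbook path, target IP, SSH key file, and RabbitMQ host. A sketch of that argument contract as a small helper (the script name and dict keys are illustrative; the original reads `sys.argv[1]` through `sys.argv[4]` directly):

```python
def parse_args(argv):
    """Validate and name the four positional arguments the driver expects."""
    if len(argv) < 5:
        raise SystemExit(
            "usage: test_driver.py PLAYBOOK_PATH IP SSH_KEY_FILE RABBITMQ_HOST")
    return {
        "playbook_path": argv[1],
        "ip": argv[2],
        "ssh_key_file": argv[3],
        "rabbit_mq_host": argv[4],
    }
```

For example, an invocation equivalent to the new script would pass `["test_driver.py", "playbook.yml", "147.228.242.97", "~/Downloads/id_rsa", "rabbit-host"]`.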
BSBElMuS291bG91emlzQHV2YS5ubDAOBgNV\nHQ8BAf8EBAMCBLAwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMEMDQGA1Ud\nIAQtMCswDAYKKoZIhvdMBQICATAMBgpghkgBhv1sBB8BMA0GCyqGSIb3TAUCAwMD\nMIGFBgNVHR8EfjB8MDygOqA4hjZodHRwOi8vY3JsMy5kaWdpY2VydC5jb20vVEVS\nRU5BZVNjaWVuY2VQZXJzb25hbENBMy5jcmwwPKA6oDiGNmh0dHA6Ly9jcmw0LmRp\nZ2ljZXJ0LmNvbS9URVJFTkFlU2NpZW5jZVBlcnNvbmFsQ0EzLmNybDB7BggrBgEF\nBQcBAQRvMG0wJAYIKwYBBQUHMAGGGGh0dHA6Ly9vY3NwLmRpZ2ljZXJ0LmNvbTBF\nBggrBgEFBQcwAoY5aHR0cDovL2NhY2VydHMuZGlnaWNlcnQuY29tL1RFUkVOQWVT\nY2llbmNlUGVyc29uYWxDQTMuY3J0MA0GCSqGSIb3DQEBCwUAA4IBAQBjF6FSxMKF\nO3no2/2Bu1/ur4h6vIiKDHqQ6cxcgu9fvBbS6gX01Ov3y2SXHidJdlPf2f+nQMQv\nuo81wOZFGtLgN0SsanbOhhOm63kUIZh4TVMhTL5jJ1ybeVEv97E6iRlk5PwExGNH\nB2u9CDvt3A+cKXC4ieXJPuWnWBLbSGgM9JhlH7BW87hxhs9L0ZBAECPh4W0DbUmn\nBriCUMIw13cMQNvcgddlZ+t8+ABZGtBHjSRL2O6yJCBUWNv1eqlFqXP5I7vDF5ry\nsITe184PVPq9t26cCoGCwpbcxhqGdAmJt5LHNvFCd2pGO8QkdNwCJHNw/2HPWgje\nV5OqUdjlA+8R\n-----END CERTIFICATE-----
@@ -82,7 +82,7 @@ def handle_delivery(message):
 def test_local():
     home = expanduser("~")
-    transformer = DockerComposeTransformer(home+"/workspace/DRIP/docs/input_tosca_files/BEIA/BEIAv3.yml")
+    transformer = DockerComposeTransformer(home+"/workspace/DRIP/docs/input_tosca_files/MOG/test_tosca2.yml")
     vresion = '2';
     compose = transformer.getnerate_compose(vresion)
     print yaml.dump(compose)
...
@@ -42,7 +42,7 @@ class DockerComposeTransformer:
         docker_types = set([])
         node_types = self.get_node_types()
         for node_type_key in node_types:
-            if node_types[node_type_key] and isinstance(node_types[node_type_key],dict) and'derived_from' in node_types[node_type_key].keys():
+            if node_types[node_type_key] and isinstance(node_types[node_type_key],dict) and 'derived_from' in node_types[node_type_key].keys():
                 if node_types[node_type_key]['derived_from'] == self.DOCKER_TYPE:
                     docker_types.add(node_type_key)
         return docker_types
@@ -83,7 +83,7 @@ class DockerComposeTransformer:
         port_maps = []
         if 'ports_mapping' in properties:
             ports_mappings = properties['ports_mapping']
-            if ports_mappings:
+            if ports_mappings and not isinstance(ports_mappings,str):
                 for port_map_key in ports_mappings:
                     port_map = ''
                     if isinstance(ports_mappings,dict):
@@ -108,6 +108,11 @@ class DockerComposeTransformer:
                         # port_map[host_port] = container_port
                         port_map = str(host_port)+':'+str(container_port)
                         port_maps.append(port_map)
+            elif isinstance(ports_mappings,str):
+                host_port = ports_mappings.split(":")[0]
+                container_port = ports_mappings.split(":")[1]
+                port_map = str(host_port)+':'+str(container_port)
+                port_maps.append(port_map)
         if 'in_ports' in properties:
             ports_mappings = properties['in_ports']
             for port_map_key in ports_mappings:
...
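The branches above normalize `ports_mapping` entries that may arrive either as a dict of mappings or as a single "host:container" string. A standalone sketch of that normalization, where the function name and the dict entry shape (`host_port`/`container_port` keys) are our assumptions, not part of the commit:

```python
def normalize_ports_mapping(ports_mappings):
    """Return a list of 'host:container' strings from either input shape."""
    port_maps = []
    if isinstance(ports_mappings, str):
        # Single "host:container" string, as handled by the new elif branch.
        host_port = ports_mappings.split(":")[0]
        container_port = ports_mappings.split(":")[1]
        port_maps.append(str(host_port) + ':' + str(container_port))
    elif isinstance(ports_mappings, dict):
        # Dict of {name: {'host_port': ..., 'container_port': ...}} entries (assumed shape).
        for key in ports_mappings:
            entry = ports_mappings[key]
            port_maps.append(str(entry['host_port']) + ':' + str(entry['container_port']))
    return port_maps
```

Checking `isinstance(..., str)` before iterating matters because iterating a string yields single characters, which is why the original `if ports_mappings:` guard alone produced broken mappings.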
@@ -41,10 +41,10 @@ class DRIPLoggingHandler(RabbitMQHandler):
         if not self.connection or self.connection.is_closed or not self.channel or self.channel.is_closed:
             self.open_connection()
+        queue='log_qeue_' + self.user
         self.channel.basic_publish(
             exchange='',
-            routing_key='log_qeue_user',
+            routing_key=queue,
             body=self.format(record),
             properties=pika.BasicProperties(
                 delivery_mode=2)
...
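This hunk replaces the hard-coded `log_qeue_user` routing key with a per-user queue name, so each user's log consumer reads only its own messages (the `log_qeue_` misspelling is kept verbatim from the code). A minimal sketch of the naming rule, with the `basic_publish` call itself omitted since it needs a live broker; the helper name is ours:

```python
def log_queue_name(user):
    # Per-user routing key, matching the 'log_qeue_' prefix used in the handler.
    # Before this commit every record went to the shared 'log_qeue_user' queue.
    return 'log_qeue_' + user
```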
@@ -70,7 +70,7 @@ class DumpPlanner:
             vm['name'] = node['id']
             vm['type'] = self.COMPUTE_TYPE
-            if 'requirements' in node:
+            if 'requirements' in node and node['requirements']:
                 for req in node['requirements']:
                     if 'host' in req and 'node_filter' in req['host']:
                         vm['host'] = req['host']['node_filter']['capabilities']['host']
...
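The added `and node['requirements']` guard protects against TOSCA nodes whose `requirements` key is present but set to null (YAML `requirements:` with no value parses to `None`, and iterating `None` raises `TypeError`). A sketch of the guarded lookup as a standalone helper; the function name is ours, the dict shape follows the diff:

```python
def extract_host(node):
    """Return the host capabilities of the first matching requirement, or None."""
    if 'requirements' in node and node['requirements']:
        for req in node['requirements']:
            if 'host' in req and 'node_filter' in req['host']:
                return req['host']['node_filter']['capabilities']['host']
    return None
```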
@@ -99,14 +99,16 @@ def handle_delivery(message):
     return json.dumps(response)

 if __name__ == "__main__":
-    home = expanduser("~")
-    planner = DumpPlanner(home+"/workspace/DRIP/docs/input_tosca_files/mog_tosca_v1.yml")
-    print planner.plan()
-#    logger.info("Input args: " + sys.argv[0] + ' ' + sys.argv[1] + ' ' + sys.argv[2])
-#    channel = init_chanel(sys.argv)
-#    global queue_name
-#    queue_name = sys.argv[2]
-#    start(channel)
+    if(sys.argv[1] == "test_local"):
+        home = expanduser("~")
+        planner = DumpPlanner(home+"/workspace/DRIP/docs/input_tosca_files/MOG/test_tosca2.yml")
+        print planner.plan()
+    else:
+        logger.info("Input args: " + sys.argv[0] + ' ' + sys.argv[1] + ' ' + sys.argv[2])
+        channel = init_chanel(sys.argv)
+        global queue_name
+        queue_name = sys.argv[2]
+        start(channel)
 #
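The `__main__` change turns the previously commented-out consumer startup into an `else` branch, dispatching on the first CLI argument: `test_local` runs the planner against a local TOSCA file, anything else starts the RabbitMQ consumer. The dispatch logic can be sketched as a pure function (the helper name is ours):

```python
def select_mode(argv):
    # Mirror the dispatch added in the commit: "test_local" runs the planner
    # on a local TOSCA file; any other argument starts the consumer loop.
    if len(argv) > 1 and argv[1] == "test_local":
        return "local"
    return "consumer"
```

Note the diff still indexes `sys.argv[1]` unconditionally, so invoking the script with no arguments raises `IndexError`; the `len(argv) > 1` check in the sketch is our addition.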