Commit 9ecb25f7 authored by Spiros Koulouzis

added one more check to produce plan and docker-compose

parent 74c03bd5
@@ -72,7 +72,7 @@
 <h1 class="page-header">Files and Libraries</h1>
 <h3 id="artifact_gwt_json_overlay">GWT JSON Overlay</h3>
-<p class="lead">Created December 20, 2017</p>
+<p class="lead">Created December 22, 2017</p>
 <p> <p>
 The <a href="http://code.google.com/webtoolkit/">Google Web Toolkit</a> JSON Overlay library provides the JSON Overlays that
 can be used to access the Web service API for this application.
@@ -97,7 +97,7 @@
 </tbody>
 </table>
 <h3 id="artifact_java_json_client_library">Java JSON Client Library</h3>
-<p class="lead">Created December 20, 2017</p>
+<p class="lead">Created December 22, 2017</p>
 <p><p>
 The Java client-side library is used to provide the set of Java objects that can be serialized
 to/from JSON using <a href="http://jackson.codehaus.org/">Jackson</a>. This is useful for accessing the
@@ -127,7 +127,7 @@
 </tbody>
 </table>
 <h3 id="artifact_java_xml_client_library">Java XML Client Library</h3>
-<p class="lead">Created December 20, 2017</p>
+<p class="lead">Created December 22, 2017</p>
 <p><p>
 The Java client-side library is used to access the Web service API for this application using Java.
 </p>
@@ -155,7 +155,7 @@
 </tbody>
 </table>
 <h3 id="artifact_js_client_library">JavaScript Client Library</h3>
-<p class="lead">Created December 20, 2017</p>
+<p class="lead">Created December 22, 2017</p>
 <p><p>
 The JavaScript client-side library defines classes that can be (de)serialized to/from JSON.
 This is useful for accessing the resources that are published by this application, but only
@@ -190,7 +190,7 @@
 </tbody>
 </table>
 <h3 id="artifact_php_json_client_library">PHP JSON Client Library</h3>
-<p class="lead">Created December 20, 2017</p>
+<p class="lead">Created December 22, 2017</p>
 <p><p>
 The PHP JSON client-side library defines the PHP classes that can be (de)serialized to/from JSON.
 This is useful for accessing the resources that are published by this application, but only
@@ -219,7 +219,7 @@
 </tbody>
 </table>
 <h3 id="artifact_php_xml_client_library">PHP XML Client Library</h3>
-<p class="lead">Created December 20, 2017</p>
+<p class="lead">Created December 22, 2017</p>
 <p><p>
 The PHP client-side library defines the PHP classes that can be (de)serialized to/from XML.
 This is useful for accessing the resources that are published by this application, but only
@@ -251,7 +251,7 @@
 </tbody>
 </table>
 <h3 id="artifact_ruby_json_client_library">Ruby JSON Client Library</h3>
-<p class="lead">Created December 20, 2017</p>
+<p class="lead">Created December 22, 2017</p>
 <p><p>
 The Ruby JSON client-side library defines the Ruby classes that can be (de)serialized to/from JSON.
 This is useful for accessing the REST endpoints that are published by this application, but only
topology_template:
node_templates:
OutputTranscoder:
artifacts:
inputdistributor2_image:
type: "tosca.artifacts.Deployment.Image.Container.Docker"
repository: SWITCH_docker_hub
file: "mogpsantos/outputtranscoder"
requirements:
dependency:
- MOGFrontend
type: "Switch.nodes.Application.Container.Docker.LOKSORR_OutputTranscoder"
properties:
ports_mapping:
- "4000:4000"
scaling_mode: single
MOGFrontend:
artifacts:
inputdistributor_image:
type: "tosca.artifacts.Deployment.Image.Container.Docker"
repository: SWITCH_docker_hub
file: "mogpsantos/switchgui"
requirements:
type: "Switch.nodes.Application.Container.Docker.LOKSORR_MOGFrontend"
properties:
ports_mapping: "5050:80"
scaling_mode: single
network_templates:
volume_templates:
artifact_types:
"tosca.artifacts.Deployment.Image.Container.Docker":
derived_from: "tosca.artifacts.Deployment.Image"
description: Blabla
node_types:
"Switch.nodes.Application.Container.Docker.PEDRO.SANTOS_ProxyTranscoder":
properties:
multicastAddrPort:
default: 3000
type: "Switch.datatypes.port"
multicastAddrIP:
default: "225.2.2.0"
type: "Switch.datatypes.Network.Multicast"
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.VLAD_THE_IMPALER_RTUSensorDataAcquisition":
properties:
name:
required: false
type: string
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.PEDRO.SANTOS_InputDistributor":
properties:
inPort:
default: 2000
type: "Switch.datatypes.port"
derived_from: "Switch.nodes.Application.Container.Docker"
"tosca.groups.Root":
"Switch.nodes.Application.Container.Docker.PEDRO.SANTOS_InputDistributor.Cardiff":
properties:
multicastAddrPort:
default: 3000
type: "Switch.datatypes.port"
multicastAddrIP:
default: "225.2.2.0"
type: "Switch.datatypes.Network.Multicast"
inPort:
default: 2000
type: "Switch.datatypes.port"
waitingTime:
default: 5
type: integer
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.LOKSORR_Tm16":
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.LOKSORR_Tm12":
artifacts:
tm12_image:
type: "tosca.artifacts.Deployment.Image.Container.Docker"
repository: SWITCH_docker_hub
file: "matej/tm12"
properties:
12:
default: 12
type: string
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.BEIA_Gateway":
properties:
Name:
default: BEIA_Gateway
type: string
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.BEIA_Acquisition":
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.PEDRO.SANTOS_Switcher.Cardiff":
properties:
waitingTime:
default: 5
type: integer
multicastAddrIP:
default: "225.2.2.0"
type: "Switch.datatypes.Network.Multicast"
switcherREST:
default: switcherREST
type: "Switch.datatypes.port"
switcherOutAddrPort:
default: 6000
type: "Switch.datatypes.port"
multicastAddrIP2:
default: "225.2.2.2"
type: "Switch.datatypes.Network.Multicast"
switcherOutAddrIP:
default: "226.2.2.2"
type: "Switch.datatypes.Network.Multicast"
multicastAddrPort:
default: 3000
type: "Switch.datatypes.port"
videoWidth:
default: 176
type: integer
multicastAddrPort2:
default: 3002
type: "Switch.datatypes.port"
videoHeight:
default: 100
type: integer
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.LOKSORR_Monitoring_Adapter":
properties:
monitoring_server:
type: string
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker":
properties:
in_ports:
entry_schema:
type: "Switch.datatypes.port"
required: false
type: map
dockers:
required: false
type: string
QoS:
required: false
type: "Switch.datatypes.QoS.AppComponent"
docker_id:
default: id
type: string
out_ports:
entry_schema:
type: "Switch.datatypes.port"
required: false
type: map
scaling_mode:
required: false
type: string
ethernet_port:
entry_schema:
type: "Switch.datatypes.ethernet_port"
required: false
type: list
name:
required: false
type: string
derived_from: "tosca.nodes.Container.Application"
"Switch.nodes.Application.Container.Docker.LOKSORR_MC23":
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.BEIA_RTUSensorDataAcquisition":
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Compute":
artifacts:
gateway_image:
type: "tosca.artifacts.Deployment.Image.Container.Docker"
repository: SWITCH_docker_hub
file: "/???"
derived_from: "tosca.nodes.Compute"
"Switch.nodes.Application.Container.Docker.BEIA_V1_NotificationServer":
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.LOKSORR_OutputTranscoder":
properties:
metrics:
default: true
type: string
OutIP:
type: string
multicastAddrIP:
type: string
multicastAddrPort:
type: string
statsdPort:
type: string
OutPort:
type: string
videoWidth:
type: string
videoHeight:
type: string
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.LOKSORR_ProxyTranscoder2":
artifacts:
monitoring_adapter_v2_image:
type: "tosca.artifacts.Deployment.Image.Container.Docker"
repository: SWITCH_docker_hub
file: "beia/monitoring_adapter"
properties:
metrics:
type: string
machineip:
type: string
multicastAddrIP:
type: string
statsdPort:
type: string
multicastAddrPort:
type: string
videoWidth:
type: string
videoHeight:
type: string
derived_from: "Switch.nodes.Application.Container.Docker"
"switch.Component.Component.Docker":
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Network":
derived_from: "tosca.nodes.network.Network"
"Switch.nodes.Application.Container.Docker.LOKSORR_Mc19":
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.LOKSORR_Dasdass":
artifacts:
dasdass_image:
type: "tosca.artifacts.Deployment.Image.Container.Docker"
repository: SWITCH_docker_hub
file: adasads
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.EventListener":
derived_from: "tosca.nodes.Root"
"Switch.nodes.Application.Container.Docker.LOKSORR_VideoSwitcher":
properties:
waitingTime:
type: string
switcherREST:
type: string
switcherOutAddrPort:
type: string
buffer:
type: string
switcherOutAddrIP:
type: string
camnumber:
type: string
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.BEIA_V1_DatabaseServer":
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.VirtualNetwork":
artifacts:
"switcher.cardiff_image":
type: "tosca.artifacts.Deployment.Image.Container.Docker"
repository: SWITCH_docker_hub
file: null
properties:
subnet:
default: "192.168.10.0"
type: string
netmask:
default: "255.255.255.0"
type: string
name:
type: string
derived_from: "tosca.nodes.Root"
"Switch.nodes.Application.Container.Docker.BEIA_RTUSensorDataManagement":
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.MonitoringAgent":
properties:
agent_id:
default: null
type: string
probes:
entry_schema:
type: "Switch.datatypes.monitoring.probe"
type: map
derived_from: "tosca.nodes.Root"
"Switch.nodes.Application.Container.Docker.LOKSORR_TestingTwo":
artifacts:
testingtwo_image:
type: "tosca.artifacts.Deployment.Image.Container.Docker"
repository: SWITCH_docker_hub
file: "testing/two"
properties:
q:
default: q
type: string
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.LOKSORR_TM11":
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.AdaptationPolicy":
derived_from: "tosca.nodes.Root"
"Switch.nodes.Application.Container.Docker.LOKSORR_TM5":
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.LOKSORR_Mc18":
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.LOKSORR_TM14":
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.PEDRO.SANTOS_Input":
properties:
port2:
default: 24
type: integer
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.BEIA_V1_Monitoring":
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.LOKSORR_TM10":
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.PEDRO.SANTOS.MOG_Input_Distributor":
properties:
Input_RTP_TS_Port:
default: 2000
type: string
Waiting_Time:
default: 5
type: string
Output_Uncompressed_Video_Multicast_Address:
default: "225.2.2.0"
type: string
Output_Uncompressed_Video_Multicast_Port:
default: "3000"
type: string
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Component":
derived_from: "tosca.nodes.Root"
"Switch.nodes.Constraint":
derived_from: "tosca.nodes.Root"
"Switch.nodes.Application.Container.Docker.PEDRO.SANTOS.MOG_Switcher":
properties:
Input_A_Uncompressed_Video_Multicast_Address:
default: "225.2.2.0"
type: string
Input_Video_Width:
default: 176
type: string
Input_B_Uncompressed_Video_Multicast_Port:
default: 3002
type: string
Input_B_Uncompressed_Video_Multicast_Address:
default: "225.2.2.1"
type: string
port:
default: 23
type: integer
Output_Uncompressed_Video_Multicast_Address:
default: "226.2.2.2"
type: string
Output_REST_PORT:
default: 8008
type: string
Output_Uncompressed_Video_Multicast_Port:
default: 6000
type: string
Input_Video_Height:
default: 100
type: string
Input_A_Uncompressed_Video_Multicast_Port:
default: 3000
type: string
Waiting_Time:
default: 5
type: string
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.LOKSORR_TM13":
artifacts:
mc19_image:
type: "tosca.artifacts.Deployment.Image.Container.Docker"
repository: SWITCH_docker_hub
file: "matej/19"
properties:
q:
default: a
type: string
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.BEIA_V1_RTUSensorDataAcquisitions":
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.LOKSORR_TemMagic4":
artifacts:
tm11_image:
type: "tosca.artifacts.Deployment.Image.Container.Docker"
repository: SWITCH_docker_hub
file: "matej/tm11"
properties:
metrics:
default: bb
type: string
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.BEIA_RTUSensorData":
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.PEDRO.SANTOS_OutputTranscoder.Cardiff":
properties:
OutIP:
default: "192.168.1.194"
type: "Switch.datatypes.Network.Multicast"
videoWidth:
default: 176
type: integer
OutPort:
default: 4000
type: "Switch.datatypes.port"
videoHeight:
default: 100
type: integer
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.MOG_InputDistributor":
artifacts:
inputdistributor2_image:
type: "tosca.artifacts.Deployment.Image.Container.Docker"
repository: SWITCH_docker_hub
file: "mogpsantos/inputpipe"
properties:
metrics:
default: true
type: boolean
waitingTime:
default: 5
type: integer
machineip:
default: InputDistributor
type: string
multicastAddrIP:
type: string
statsdPort:
default: 8125
type: "Switch.datatypes.port"
multicastAddrPort:
default: 3000
type: "Switch.datatypes.port"
videoWidth:
default: 720
type: integer
inPort:
default: 2000
type: "Switch.datatypes.port"
videoHeight:
default: 406
type: integer
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Connection":
properties:
source:
type: "Switch.datatypes.Application.Connection.EndPoint"
bandwidth:
type: integer
multicast:
type: "Switch.datatypes.Network.Multicast"
jitter:
required: false
type: integer
target:
type: "Switch.datatypes.Application.Connection.EndPoint"
latency:
required: false
type: integer
QoS:
type: "Switch.datatypes.QoS.AppComponent"
derived_from: "tosca.nodes.Root"
"Switch.nodes.Application.Container.Docker.LOKSORR_TM6":
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Requirement":
derived_from: "tosca.nodes.Root"
"Switch.nodes.ExternalComponent":
derived_from: "tosca.nodes.Root"
"Switch.nodes.DST":
properties:
dave:
type: string
derived_from: "tosca.nodes.Root"
"Switch.nodes.Application.Container.Docker.LOKSORR_Bb":
properties:
dsa:
default: das
type: string
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.BEIA_NotificationServer":
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.BEIA_V1_RTUSensorDataAcquisition":
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.LOKSORR_Tm15":
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.LOKSORR_TM7":
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.MonitoringServer":
properties:
ports_mapping:
entry_schema:
type: "Switch.datatypes.port_mapping"
type: map
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.LOKSORR_TM1":
artifacts:
tm3_image:
type: "tosca.artifacts.Deployment.Image.Container.Docker"
repository: SWITCH_docker_hub
file: "matej/tm3"
properties:
variable1:
default: variable1
type: string
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.LOKSORR_TM3":
properties:
v3:
default: v3
type: string
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.UL_JitsiMeet_docker":
properties:
ips:
type: string
deploy:
type: string
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.LOKSORR_SomethingStupid":
artifacts:
somethingstupid_image:
type: "tosca.artifacts.Deployment.Image.Container.Docker"
repository: SWITCH_docker_hub
file: "something/stupid"
properties:
stupidvalue:
default: "You are stupid"
type: string
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.LOKSORR_MC22":
artifacts:
mc23_image:
type: "tosca.artifacts.Deployment.Image.Container.Docker"
repository: SWITCH_docker_hub
file: "mc/23"
properties:
m:
default: m2
type: string
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.LOKSORR_TM2":
properties:
var2:
default: var2
type: string
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.LOKSORR_TM9":
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.LOKSORR_TOSCAMagic":
artifacts:
toscamagic_image:
type: "tosca.artifacts.Deployment.Image.Container.Docker"
repository: SWITCH_docker_hub
file: "something/different"
properties:
qq:
default: qq
type: string
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.LOKSORR_MOGFrontend":
properties:
ipPT1:
type: string
ipPT2:
type: string
ipPT3:
type: string
ipPT4:
type: string
ipVS:
type: string
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.BEIA_V1_TelemetryGateway":
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.LOKSORR_Mc21":
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.Application.Container.Docker.LOKSORR_BlablaM17":
derived_from: "Switch.nodes.Application.Container.Docker"
"Switch.nodes.variable":
properties:
multicastAddrPort:
default: 3000
type: integer
multicastAddrIP:
default: "255.2.2.0"
type: string
videoWidth:
default: 170
type: integer
videoHeight:
default: 100
type: integer
derived_from: "tosca.nodes.Root"
"Switch.nodes.MessagePasser":
derived_from: "tosca.nodes.Root"
"Switch.nodes.Application.Container.Docker.BEIA_DB":
derived_from: "Switch.nodes.Application.Container.Docker"
repositories:
SWITCH_docker_hub:
url: "https://github.com/switch-project"
credential:
token_type: "X-Auth-Token"
token: 604bbe45ac7143a79e14f3158df67091
protocol: xauth
description: "switch repository in GitHub"
data_types:
"Switch.datatypes.monitoring.metric.threshold":
properties:
operator:
type: string
value:
type: integer
derived_from: "tosca.datatypes.Root"
"Switch.datatypes.port":
properties:
type:
type: string
port:
type: string
derived_from: "tosca.datatypes.Root"
"Switch.datatypes.Application.Connection.EndPoint":
properties:
netmask:
type: string
component_name:
type: string
port_name:
type: string
address:
type: string
derived_from: "tosca.datatypes.Root"
"Switch.datatypes.monitoring.probe":
properties:
active:
type: boolean
path:
required: false
type: string
static:
type: boolean
name:
type: string
metrics:
entry_schema:
type: "Switch.datatypes.monitoring.metric"
type: map
derived_from: "tosca.datatypes.Root"
"Switch.datatypes.hw.host":
properties:
cpu_frequency:
type: float
mem_size:
type: integer
num_cpus:
type: integer
disk_size:
type: integer
derived_from: "tosca.datatypes.Root"
"Switch.datatypes.ethernet_port":
properties:
subnet_name:
type: string
name:
type: string
address:
type: string
derived_from: "tosca.datatypes.Root"
"Switch.datatypes.hw.os":
properties:
os_version:
type: string
distribution:
type: string
type:
type: string
architecture:
type: string
derived_from: "tosca.datatypes.Root"
"Switch.datatypes.QoS.AppComponent":
properties:
response_time:
type: integer
derived_from: "tosca.datatypes.Root"
"Switch.datatypes.Application.Connection.Multicast":
properties:
multicastAddrPort:
type: string
multicastAddrIP:
type: string
derived_from: "tosca.datatypes.Root"
"Switch.datatypes.Network.Multicast":
properties:
multicastAddrPort:
type: string
multicastAddrIP:
type: string
derived_from: "tosca.datatypes.Root"
"Switch.datatypes.port_mapping":
properties:
host_port:
type: integer
container_port:
type: integer
derived_from: "tosca.datatypes.Root"
"Switch.datatypes.monitoring.metric":
properties:
thresholds:
entry_schema:
type: "Switch.datatypes.monitoring.metric.threshold"
required: false
type: map
type:
type: string
name:
type: string
unit:
required: false
type: string
derived_from: "tosca.datatypes.Root"
tosca_definitions_version: tosca_simple_yaml_1_0
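The commit message says this plan is also used to produce a docker-compose file. A hypothetical compose rendering of the two node_templates above could look like the following sketch; the image names, port mappings, and dependency come straight from the TOSCA file, but the compose layout itself is an illustrative assumption, not the planner's actual output:

```yaml
# Hypothetical docker-compose equivalent of the node_templates above.
version: "2"
services:
  OutputTranscoder:
    image: mogpsantos/outputtranscoder   # artifact file of inputdistributor2_image
    ports:
      - "4000:4000"                      # ports_mapping
    depends_on:
      - MOGFrontend                      # requirements/dependency
  MOGFrontend:
    image: mogpsantos/switchgui          # artifact file of inputdistributor_image
    ports:
      - "5050:80"                        # ports_mapping
```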
...@@ -56,6 +56,7 @@ if not getattr(logger, 'handler_set', None): ...@@ -56,6 +56,7 @@ if not getattr(logger, 'handler_set', None):
retry=0 retry=0
retry_count = 20 retry_count = 20
tasks_done = {}
#cwd = os.getcwd() #cwd = os.getcwd()
falied_playbook_path='/tmp/falied_playbook.yml' falied_playbook_path='/tmp/falied_playbook.yml'
...@@ -65,7 +66,7 @@ def install_prerequisites(vm,return_dict): ...@@ -65,7 +66,7 @@ def install_prerequisites(vm,return_dict):
logger.info("Installing ansible prerequisites on: "+vm.ip) logger.info("Installing ansible prerequisites on: "+vm.ip)
ssh = paramiko.SSHClient() ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(vm.ip, username=vm.user, key_filename=vm.key) ssh.connect(vm.ip, username=vm.user, key_filename=vm.key,timeout=5)
sftp = ssh.open_sftp() sftp = ssh.open_sftp()
file_path = os.path.dirname(os.path.abspath(__file__)) file_path = os.path.dirname(os.path.abspath(__file__))
sftp.chdir('/tmp/') sftp.chdir('/tmp/')
...@@ -126,26 +127,29 @@ def create_faied_playbooks(failed_tasks,playbook_path): ...@@ -126,26 +127,29 @@ def create_faied_playbooks(failed_tasks,playbook_path):
hosts +=host+"," hosts +=host+","
if task_name == 'setup': if task_name == 'setup':
found_first_failed_task = True found_first_failed_task = True
else: else:
found_first_failed_task = False found_first_failed_task = False
logger.info("First faield task: \'"+task_name+ "\'. Host: "+ host)
for play in plays: for play in plays:
for task in play['tasks']: for task in play['tasks']:
if found_first_failed_task: if found_first_failed_task:
retry_task.append(task) retry_task.append(task)
else: else:
if task_name in tasks:
host_done = tasks_done[task_name]
if host_done == host or host_done == 'all':
logger.info("Task: \'"+task_name+ "\'. on host: "+ host+ " already done. Skipping" )
continue
if 'name' in task and task['name'] == task_name: if 'name' in task and task['name'] == task_name:
retry_task.append(task) retry_task.append(task)
logger.info("First faield task: \'"+task_name+ "\'. Host: "+ host)
found_first_failed_task = True found_first_failed_task = True
elif task_name in task: elif task_name in task:
retry_task.append(task) retry_task.append(task)
logger.info("First faield task: \'"+task_name+ "\'. Host: "+ host)
found_first_failed_task = True found_first_failed_task = True
...@@ -200,6 +204,8 @@ def execute_playbook(hosts, playbook_path,user,ssh_key_file,extra_vars,passwords ...@@ -200,6 +204,8 @@ def execute_playbook(hosts, playbook_path,user,ssh_key_file,extra_vars,passwords
#failed_tasks.append(res)
resp = json.dumps({"host":res['ip'], "result":res['result']._result,"task":res['task']})
logger.info("Task: "+res['task'] + ". Host: "+ res['ip'] +". State: ok")
global tasks_done
tasks_done[res['task']] = res['ip']
answer.append({"host":res['ip'], "result":res['result']._result,"task":res['task']})
@@ -215,6 +221,7 @@ def execute_playbook(hosts, playbook_path,user,ssh_key_file,extra_vars,passwords
#failed_tasks.append(res['result'])
resp = json.dumps({"host":res['ip'], "result":res['result']._result, "task":res['task']})
logger.error("Task: "+res['task'] + ". Host: "+ res['ip'] +". State: host_failed")
logger.error(resp)
answer.append({"host":res['ip'], "result":res['result']._result,"task":res['task']})
return answer,failed_tasks
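The new `tasks_done` bookkeeping above lets a retry run skip tasks that already succeeded on a given host. A minimal Python 3 sketch of that check (function names, task names and IPs are illustrative, not from the repo):

```python
# Global registry mirroring tasks_done[res['task']] = res['ip'] in the callback above.
tasks_done = {}

def record_done(task_name, host):
    # Remember which host completed this task.
    tasks_done[task_name] = host

def should_skip(task_name, host):
    # On retry, skip a task that already succeeded on this host (or on 'all' hosts).
    done_on = tasks_done.get(task_name)
    return done_on == host or done_on == 'all'

record_done('install docker', '10.0.0.5')
assert should_skip('install docker', '10.0.0.5')
assert not should_skip('install docker', '10.0.0.6')
assert not should_skip('start swarm', '10.0.0.5')
```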
@@ -225,11 +232,14 @@ def run(vm_list,playbook_path,rabbitmq_host,owner):
hosts=""
ssh_key_file=""
rabbit = DRIPLoggingHandler(host=rabbitmq_host, port=5672,user=owner)
logger.addHandler(rabbit)
logger.info("DRIPLogging host: \'"+str(rabbitmq_host)+ "\'"+" logging message owner: \'"+owner+"\'")
manager = multiprocessing.Manager()
return_dict = manager.dict()
jobs = []
if os.path.exists(falied_playbook_path):
    os.remove(falied_playbook_path)
@@ -251,7 +261,6 @@ def run(vm_list,playbook_path,rabbitmq_host,owner):
passwords = {}
logger.info("Executing playbook: " + (playbook_path))
answer,failed_tasks = execute_playbook(hosts,playbook_path,user,ssh_key_file,extra_vars,passwords)
failed_playsbooks = []
@@ -264,15 +273,20 @@ def run(vm_list,playbook_path,rabbitmq_host,owner):
task_name = str(failed_task._task.get_name())
retry_setup = 0
while task_name and task_name == 'setup' and retry_setup < retry_count:
    retry_setup += 1
    answer,failed_tasks = execute_playbook(hosts,playbook_path,user,ssh_key_file,extra_vars,passwords)
    if failed_tasks:
        failed = failed_tasks[0]
        failed_task = failed['task']
        if isinstance(failed_task, ansible.parsing.yaml.objects.AnsibleUnicode) or isinstance(failed_task, unicode) or isinstance(failed_task,str):
            task_name = str(failed_task)
        else:
            task_name = str(failed_task._task.get_name())
    else:
        task_name = None
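The retry-on-`setup` block above has to normalise the failed task's name, which may arrive either as a plain string (`AnsibleUnicode`/`unicode`/`str` in the original Python 2 code) or as a result object exposing `_task.get_name()`. A Python 3 sketch, collapsing the string cases into `str` and using a fabricated stand-in for the Ansible result object:

```python
class FakeResult:
    # Stand-in for an Ansible result object carrying a task.
    class _Task:
        @staticmethod
        def get_name():
            return 'setup'
    _task = _Task()

def task_name_of(failed_task):
    # Strings (incl. AnsibleUnicode, a str subclass in Python 3) pass through;
    # result objects are asked for their task name.
    if isinstance(failed_task, str):
        return str(failed_task)
    return str(failed_task._task.get_name())

assert task_name_of('setup') == 'setup'
assert task_name_of(FakeResult()) == 'setup'
```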
while not failed_playsbooks:
failed_playsbooks = create_faied_playbooks(failed_tasks,playbook_path)
@@ -280,15 +294,22 @@ def run(vm_list,playbook_path,rabbitmq_host,owner):
for failed_playbook in failed_playsbooks:
    hosts = failed_playbook[0]['hosts']
    logger.info("Writing new playbook at : \'"+falied_playbook_path+ "\'")
    with open(falied_playbook_path, 'w') as outfile:
        yaml.dump(failed_playbook, outfile)
    retry_failed_tasks = 0
    failed_tasks = None
    done = False
    while not done:
        logger.info("Executing playbook : " + (falied_playbook_path) +" in host: "+hosts+" Retries: "+str(retry_failed_tasks))
        answer,failed_tasks = execute_playbook(hosts,falied_playbook_path,user,ssh_key_file,extra_vars,passwords)
        retry_failed_tasks += 1
        if retry_failed_tasks > retry_count or not failed_tasks:
            retry_failed_tasks = 0
            done = True
            break
if os.path.exists(falied_playbook_path):
    os.remove(falied_playbook_path)
...
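The `done`/`retry_failed_tasks` loop above re-runs the failed playbook until it succeeds or the retry budget is exceeded. Its control flow, isolated (Python 3; `execute` is a stand-in for `execute_playbook` and returns the list of failed tasks, empty on success — names here are illustrative):

```python
def retry_playbook(execute, retry_count):
    # Re-run until success or until retries exceed the budget;
    # mirrors the while-not-done loop in the diff above.
    retries = 0
    while True:
        failed = execute()
        retries += 1
        if retries > retry_count or not failed:
            return failed

attempts = iter([['task-a'], ['task-a'], []])  # fails twice, then succeeds
assert retry_playbook(lambda: next(attempts), retry_count=5) == []
```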
@@ -47,15 +47,15 @@ def get_resp_line(line):
def docker_check(vm, compose_name):
try:
logger.info("Starting docker info services on: "+vm.ip)
paramiko.util.log_to_file("deployment.log")
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(vm.ip, username=vm.user, key_filename=vm.key, timeout=5)
node_format = '\'{\"ID\":\"{{.ID}}\",\"hostname\":\"{{.Hostname}}\",\"status\":\"{{.Status}}\",\"availability\":\"{{.Availability}}\"}\''
cmd = 'sudo docker node ls --format ' + (node_format)
logger.info("Sending: "+cmd)
json_response = {}
cluster_node_info = []
stdin, stdout, stderr = ssh.exec_command(cmd)
@@ -73,7 +73,7 @@ def docker_check(vm, compose_name):
services_format = '\'{\"ID\":\"{{.ID}}\",\"name\":\"{{.Name}}\",\"image\":\"{{.Image}}\",\"node\":\"{{.Node}}\",\"desired_state\":\"{{.DesiredState}}\",\"current_state\":\"{{.CurrentState}}\",\"error\":\"{{.Error}}\",\"ports\":\"{{.Ports}}\"}\''
cmd = 'sudo docker stack ps '+ compose_name +' --format ' + services_format
logger.info("Sending: "+cmd)
stdin, stdout, stderr = ssh.exec_command(cmd)
stack_ps_resp = stdout.readlines()
services_info = []
@@ -99,6 +99,7 @@ def docker_check(vm, compose_name):
stack_format = '\'{"ID":"{{.ID}}","name":"{{.Name}}","mode":"{{.Mode}}","replicas":"{{.Replicas}}","image":"{{.Image}}"}\''
cmd = 'sudo docker stack services '+ compose_name +' --format ' + (stack_format)
logger.info("Sending: "+cmd)
stdin, stdout, stderr = ssh.exec_command(cmd)
logger.info("Got response running \"docker stack services\"")
stack_resp = stdout.readlines()
@@ -115,9 +116,9 @@ def docker_check(vm, compose_name):
cmd = 'sudo docker node inspect '
for hostname in nodes_hostname:
    cmd += ' '+hostname
logger.info("Sending: "+cmd)
stdin, stdout, stderr = ssh.exec_command(cmd)
inspect_resp = stdout.readlines()
response_str = ""
@@ -134,7 +135,8 @@ def docker_check(vm, compose_name):
cmd = 'sudo docker inspect '
for id in services_ids:
    cmd += ' '+id
logger.info("Sending: "+cmd)
stdin, stdout, stderr = ssh.exec_command(cmd)
logger.info("Got response running \"docker inspect\"")
inspect_resp = stdout.readlines()
...
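The `--format` template above makes `docker node ls` emit one JSON object per line, which `docker_check` then reads from the command's stdout. A sketch of that parsing step with fabricated sample output (the field set matches the template, the values do not come from a real cluster):

```python
import json

# Sample stdout lines as `docker node ls --format '{"ID":"{{.ID}}",...}'`
# would produce them, one JSON object per node.
sample_stdout = [
    '{"ID":"abc1","hostname":"node1","status":"Ready","availability":"Active"}\n',
    '{"ID":"abc2","hostname":"node2","status":"Ready","availability":"Active"}\n',
]
cluster_node_info = [json.loads(line) for line in sample_stdout]
assert cluster_node_info[0]['hostname'] == 'node1'
assert len(cluster_node_info) == 2
```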
@@ -41,7 +41,7 @@ def deploy_compose(vm, compose_file, compose_name,docker_login):
paramiko.util.log_to_file("deployment.log")
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(vm.ip, username=vm.user, key_filename=vm.key, timeout=5)
sftp = ssh.open_sftp()
sftp.chdir('/tmp/')
sftp.put(compose_file, "docker-compose.yml")
@@ -55,11 +55,14 @@ def deploy_compose(vm, compose_file, compose_name,docker_login):
else:
#stdin, stdout, stderr = ssh.exec_command("sudo docker stack rm %s" % (compose_name))
#stdout.read()
#err = stderr.read()
cmd = "sudo docker stack deploy --with-registry-auth --compose-file /tmp/docker-compose.yml "+compose_name
logger.info("Sending: "+cmd)
stdin, stdout, stderr = ssh.exec_command(cmd)
out = stdout.read()
err = stderr.read()
logger.info("stderr from: "+vm.ip + " "+ err)
logger.info("stdout from: "+vm.ip + " "+ out)
logger.info("Finished docker compose deployment on: "+vm.ip)
except Exception as e:
global retry
...
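The reworked deploy step above logs the command and drains both stdout and stderr before declaring the deployment finished. The same read-both-streams pattern, sketched with a local subprocess standing in for `ssh.exec_command` (assumes a POSIX `sh` is available):

```python
import subprocess

# Local stand-in for the remote `docker stack deploy` call: one command that
# writes to both streams, which we capture and could log separately.
cmd = ['sh', '-c', 'echo deployed; echo warning >&2']
proc = subprocess.run(cmd, capture_output=True, text=True)
out, err = proc.stdout, proc.stderr
assert 'deployed' in out
assert 'warning' in err
```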
@@ -41,7 +41,7 @@ def install_engine(vm,return_dict):
paramiko.util.log_to_file("deployment.log")
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(vm.ip, username=vm.user, key_filename=vm.key, timeout=5)
stdin, stdout, stderr = ssh.exec_command("sudo dpkg --get-selections | grep docker")
temp_list = stdout.readlines()
temp_str = ""
...
@@ -43,7 +43,7 @@ def install_manager(vm):
paramiko.util.log_to_file("deployment.log")
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(vm.ip, username=vm.user, key_filename=vm.key, timeout=5)
stdin, stdout, stderr = ssh.exec_command("sudo docker info | grep 'Swarm'")
temp_list1 = stdout.readlines()
stdin, stdout, stderr = ssh.exec_command("sudo docker info | grep 'Is Manager'")
@@ -79,7 +79,7 @@ def install_worker(join_cmd, vm,return_dict):
paramiko.util.log_to_file("deployment.log")
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(vm.ip, username=vm.user, key_filename=vm.key, timeout=5)
stdin, stdout, stderr = ssh.exec_command("sudo docker info | grep 'Swarm'")
temp_list1 = stdout.readlines()
if temp_list1[0].find("Swarm: active") != -1:
...
@@ -41,10 +41,10 @@ class DRIPLoggingHandler(RabbitMQHandler):
if not self.connection or self.connection.is_closed or not self.channel or self.channel.is_closed:
    self.open_connection()
queue = 'log_qeue_' + self.user
self.channel.basic_publish(
    exchange='',
    routing_key=queue,
    body=self.format(record),
    properties=pika.BasicProperties(
        delivery_mode=2)
...
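With this change each user gets a dedicated log queue: the handler derives the routing key from the user name instead of the fixed `'log_qeue_user'`, and publishes persistent messages (`delivery_mode=2`) on the default exchange. The queue-name construction, isolated as a hypothetical helper (the `log_qeue_` spelling is kept from the source):

```python
def log_queue_for(user):
    # Per-user queue name, as built in emit() above before basic_publish.
    return 'log_qeue_' + user

assert log_queue_for('alice') == 'log_qeue_alice'
```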
---
publicKeyPath: "name@id_rsa.pub"
userName: "vm_user"
topologies:
- topology: "egi-level-1"
cloudProvider: "EGI"
domain: "CESNET"
status: "fresh"
tag: "fixed"
statusInfo: null
copyOf: null
sshKeyPairId: null
connections: null
---
subnets: null
components:
- name: "nodeA"
type: "switch/compute"
nodeType: "medium"
OStype: "Ubuntu 16.04"
script: null
role: master
dockers: null
publicAddress: null
ethernetPort: null
VMResourceID: null
@@ -25,13 +25,30 @@ from os.path import expanduser
home = expanduser("~")
playbook_path=home+"/Downloads/playbook.yml"
playbook_path=sys.argv[1] #home+"/Downloads/playbook.yml"
ip = "147.228.242.97"
ip = sys.argv[2] #"147.228.242.97"
user="vm_user"
role = "master"
ssh_key_file=home+"/Downloads/id_rsa"
ssh_key_file = sys.argv[3] #home+"/Downloads/id_rsa"
vm_list = set()
vm = VmInfo(ip, user, ssh_key_file, role)
vm_list.add(vm)
rabbit_mq_host = sys.argv[4] #rabbit_mq_host
print sys.argv
print "playbook_path: "+playbook_path
print "ip: "+ip
print "ssh_key_file: "+ssh_key_file
print "rabbit_mq_host: "+rabbit_mq_host
ret = ansible_playbook.run(vm_list,playbook_path,rabbit_mq_host,"owner")
\ No newline at end of file
-----BEGIN CERTIFICATE-----\nMIILUjCCCjqgAwIBAgIEUtiPbTANBgkqhkiG9w0BAQsFADCBmzETMBEGCgmSJomT\n8ixkARkWA29yZzEWMBQGCgmSJomT8ixkARkWBnRlcmVuYTETMBEGCgmSJomT8ixk\nARkWA3RjczELMAkGA1UEBhMCTkwxIzAhBgNVBAoTGlVuaXZlcnNpdGVpdCB2YW4g\nQW1zdGVyZGFtMSUwIwYDVQQDDBxTLiBLb3Vsb3V6aXMgc2tvdWxvdTFAdXZhLm5s\nMB4XDTE3MTIyMDE3MDUzOVoXDTE3MTIyMTA1MTAzOVowgbAxEzARBgoJkiaJk/Is\nZAEZFgNvcmcxFjAUBgoJkiaJk/IsZAEZFgZ0ZXJlbmExEzARBgoJkiaJk/IsZAEZ\nFgN0Y3MxCzAJBgNVBAYTAk5MMSMwIQYDVQQKExpVbml2ZXJzaXRlaXQgdmFuIEFt\nc3RlcmRhbTElMCMGA1UEAwwcUy4gS291bG91emlzIHNrb3Vsb3UxQHV2YS5ubDET\nMBEGA1UEAxMKMTM4OTkyNDIwNTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA\nzQL++YyA43yvhsgWhFW2tphy1LD1gH7IYGgKDz3EmK1SPusYE2VUj10r+JEGamp6\nPvbR6yE2G5Ej9cLHj7/lsDWta1q4pOtYBbVmtWDW34uyngvQd6DDZweJ8usaJ5bS\noVBOQQDuF3bWc21jjLWl/RrX7TlgkgpN2FIl213d/PcCAwEAAaOCCAkwgggFMIIH\nowYKKwYBBAG+RWRkBQSCB5MwggePMIIHizCCB4cwggZvAgEBMIG5oIG2MIGhpIGe\nMIGbMRMwEQYKCZImiZPyLGQBGRYDb3JnMRYwFAYKCZImiZPyLGQBGRYGdGVyZW5h\nMRMwEQYKCZImiZPyLGQBGRYDdGNzMQswCQYDVQQGEwJOTDEjMCEGA1UEChMaVW5p\ndmVyc2l0ZWl0IHZhbiBBbXN0ZXJkYW0xJTAjBgNVBAMMHFMuIEtvdWxvdXppcyBz\na291bG91MUB1dmEubmwCEAr1c7q4kKJ79gVb6q/wT1GgZTBjpGEwXzESMBAGCgmS\nJomT8ixkARkWAmN6MRkwFwYKCZImiZPyLGQBGRYJY2VzbmV0LWNhMQ8wDQYDVQQK\nDAZDRVNORVQxHTAbBgNVBAMMFHZvbXMyLmdyaWQuY2VzbmV0LmN6MA0GCSqGSIb3\nDQEBBQUAAhEAs5BdpusPQqqoPm2Dp7fRUzAiGA8yMDE3MTIyMDE3MTAzOVoYDzIw\nMTcxMjIxMDUxMDM5WjBwMG4GCisGAQQBvkVkZAQxYDBeoC6GLGZlZGNsb3VkLmVn\naS5ldTovL3ZvbXMyLmdyaWQuY2VzbmV0LmN6OjE1MDAyMCwEKi9mZWRjbG91ZC5l\nZ2kuZXUvUm9sZT1OVUxML0NhcGFiaWxpdHk9TlVMTDCCBI0wggRdBgorBgEEAb5F\nZGQKBIIETTCCBEkwggRFMIIEQTCCAymgAwIBAgIICE3qxIWQapQwDQYJKoZIhvcN\nAQEFBQAwWTESMBAGCgmSJomT8ixkARkWAmN6MRkwFwYKCZImiZPyLGQBGRYJY2Vz\nbmV0LWNhMRIwEAYDVQQKDAlDRVNORVQgQ0ExFDASBgNVBAMMC0NFU05FVCBDQSAz\nMB4XDTE3MTEyMTEwNTUxM1oXDTE4MTIyMTEwNTUxM1owXzESMBAGCgmSJomT8ixk\nARkWAmN6MRkwFwYKCZImiZPyLGQBGRYJY2VzbmV0LWNhMQ8wDQYDVQQKDAZDRVNO\nRVQxHTAbBgNVBAMMFHZvbXMyLmdyaWQuY2VzbmV0LmN6MIIBIjANBgkqhkiG9w0B\nAQEFAAOCAQ8AMIIBCgKCAQEAyBhkDJuohJMmEtsKzQeNWwLEUAH9sMqOp
BNBHzP8\nBdJ5fvk/lo19g75qoxr3gGEQGmylMv/VshLDJAnnJum1uO+xNps9D1DdUfuLvRVM\nPQGAUD7S7Upx+5A/kKacxifpoLUIHPSLb+bJXHc4G2grUDxdJBIhDm1TF7zozOYd\nl/uadrflN5ad6nmoCc8ZCQTD9nXzfkgr8lI4G408ZzbGWQV3TNxnPZvT3P1x9wAq\nsnm6QcDAlq//VtPwvxbW+q8X7Oldzif9C88VKI8HbIEcxb/Tl1QfLH30W70MgP/Y\n0xdCXBJOThHq6czFutFZcIGVCayu8hTS6qVzB1Q0a09+LQIDAQABo4IBBTCCAQEw\nHQYDVR0OBBYEFOL7dRC2Emf8ykYDEytqKgQVwAcsMAwGA1UdEwEB/wQCMAAwHwYD\nVR0jBBgwFoAU9V0/vJiZix/xSOf+R4dxCaLcukUwJwYDVR0gBCAwHjAMBgoqhkiG\n90wFAgIBMA4GDCsGAQQBvnkBAgIDATA4BgNVHR8EMTAvMC2gK6AphidodHRwOi8v\nY3JsLmNlc25ldC1jYS5jei9DRVNORVRfQ0FfMy5jcmwwDgYDVR0PAQH/BAQDAgWg\nMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAfBgNVHREEGDAWghR2b21z\nMi5ncmlkLmNlc25ldC5jejANBgkqhkiG9w0BAQUFAAOCAQEANENtGHcyD9D/Yjh2\nhsWItKHv2/j7hxjtn1KrDzKvfideAO9fsS9YJtcMuoAKU/41LZBMRiTIEcroK1fg\nQU/xrVauarTNoKqRS8vt7M+YV9hyaSTp7moOHatbB07NmacAgbdntM0lN9yBeX0/\ntZaCFztA6wE0ZmVbyoiJ31+Re7ksnEmYNsR4PedeOQwskU49XlYLVMQzFndfcR9d\nYICmuXwR1QcIpORAcwXSvgdRsU9/xbHu71+pix73NcGEaOASWqXiXvOftcTTYmr0\nadP9aV8VxbSBX0rFsba/WN8QG/X7AoqhgnQjoqHXIogmYDHDTnFQzZMywrS36cah\nC/AGnzAJBgNVHTgEAgUAMB8GA1UdIwQYMBaAFOL7dRC2Emf8ykYDEytqKgQVwAcs\nMA0GCSqGSIb3DQEBBQUAA4IBAQAeAFjA4P+aJ6LVji7EQB9cU0n6jLDcboNA2i1A\nootSCXl9LwO+CHnV4+FPANKIH7GkX8LT260/4q8hQfEQqj/vy/LKoLnWKLiqp6oL\nT1zS3Idjm47saBrv8UYk/WKCe+p4vTGYm6//b7l9DZMPqxzWyw0VEAJcImvCeqeX\nOVRBMZXXRuEMr+g9ha2pL1jS3PUb+BgMNlv0nDpmYBsaKKN/IcZcyfo6oISQDkqg\nnwJW3k5UbbyWo1jtMXRtSOw47JlZf8IoAzkLxNxs4lM0zLzp5y2MxWh+aqfh1T1n\nhy1ij+bOqWsI2ufPijy7ZWd2/2CJZw1BSNEojHwgKT2TWuucMA4GA1UdDwEB/wQE\nAwIEsDAMBgNVHRMBAf8EAjAAMB8GA1UdIwQYMBaAFBmtsZQ7NOZGsl0k4nX64d0C\nIUKcMB0GCCsGAQUFBwEOAQH/BA4wDDAKBggrBgEFBQcVATANBgkqhkiG9w0BAQsF\nAAOCAQEAgMDXd1MRBFfZ6mMAGYK2Ou02ykbrWRQtPAb9YfMDYqGQLsK15jIF39qt\niWwxr840eoLHSp8g1P5lQjRiKp6naqAfxxtRavY5LBzVA+pqWgODUaYVCqex45W4\nH3Pt9lu+/X+NNhdeC+m8Jr/vZSvN1W9EfYotPBsbu6AGTx319Xz/vQaN5DU6+KbX\nJNoQ/iE+cZ0wTsRDT0Q+XlKMETIbHioh/ADzxSsxkUKppy3zV2cM+MSzptVAiL8E\njQauRaOSy38b3FIhXqKUimMA5rwbjHZ02SCVxkA5svT+ZUGzsT6j2H/FxataoGoy\nAaNqgiy9cBX
cU9G1eKnuk2fd8LqLkw==\n-----END CERTIFICATE-----\n-----BEGIN RSA PRIVATE KEY-----\nMIICXQIBAAKBgQDNAv75jIDjfK+GyBaEVba2mHLUsPWAfshgaAoPPcSYrVI+6xgT\nZVSPXSv4kQZqano+9tHrITYbkSP1wsePv+WwNa1rWrik61gFtWa1YNbfi7KeC9B3\noMNnB4ny6xonltKhUE5BAO4XdtZzbWOMtaX9GtftOWCSCk3YUiXbXd389wIDAQAB\nAoGADyGKehahqKiaN+Nqrge5fY6Q4xvQctRoq5ziKS/Q48ffCx/E3iGbdR1WUnk5\ntP742M7UvXrtCGnU8p2Wpwhtxkq7mN8mUbcrX3Zn0/lMPyDlpAEdp8llqVl9HocM\nbdkgR3ibmhsmoZXSHbawFvIZ5hBypT6qU9zfvFgwVM6PjfkCQQD02WlEKJf8g3vu\nTshGlHgi+7ZgmvjCyiNvHeSjdLG4wdGoedMi5BrPpIouL9mrf6vbROe0cez/j65D\n16x8wiEzAkEA1lki6k6WxxQe/G1GU/1Cxu8QYZuYAPyPDxvffCaNBPWF4jPeZlXy\n5BMvpeM4Iwhn4c5QcfRitz9zC0C9hNc9LQJBAMX7BiMWr85+grcu/MIVSw7+eXmj\n1YGr8ProMPf6Y7oA/oY7+3069HLxmMm/50HE+jFShghiFkCO7VnuCorWbgECQQCB\nU+rDIIPMvhEsEOqcBnTh/qANpImEHt5aKWEgUUpIsbMEFnObn0Qb5I+dMYlPaeTz\n0z2qY9+j3P6WzYsLuapJAkBMYCWNWjVXGJc+eEA8eC7TwYg5JEcbGwpkfQLhFJvW\nLhzb3LJMEvu0ohOoAv1pCqBZAfPWVb9IIFj3vr7G75i3\n-----END RSA PRIVATE KEY-----\n-----BEGIN CERTIFICATE-----\nMIIFdTCCBF2gAwIBAgIQCvVzuriQonv2BVvqr/BPUTANBgkqhkiG9w0BAQsFADBy\nMQswCQYDVQQGEwJOTDEWMBQGA1UECBMNTm9vcmQtSG9sbGFuZDESMBAGA1UEBxMJ\nQW1zdGVyZGFtMQ8wDQYDVQQKEwZURVJFTkExJjAkBgNVBAMTHVRFUkVOQSBlU2Np\nZW5jZSBQZXJzb25hbCBDQSAzMB4XDTE3MDUxODAwMDAwMFoXDTE4MDYxNzEyMDAw\nMFowgZsxEzARBgoJkiaJk/IsZAEZFgNvcmcxFjAUBgoJkiaJk/IsZAEZFgZ0ZXJl\nbmExEzARBgoJkiaJk/IsZAEZFgN0Y3MxCzAJBgNVBAYTAk5MMSMwIQYDVQQKExpV\nbml2ZXJzaXRlaXQgdmFuIEFtc3RlcmRhbTElMCMGA1UEAwwcUy4gS291bG91emlz\nIHNrb3Vsb3UxQHV2YS5ubDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB\nAL0ldXsREl+e/7lqikqpbb/wk1HbckuE8tSE8taol5O7gHiLV2MjN153pgoJ8cEk\nSt57Qh/AzVCLU0IZYyeY5pYk0MVzXwqMcPek5ZlVN/p4mJxx97oLJaq9lptJriqc\nldAiV8sy2ckC6gGDFV2pu0orA6HhKYJ8UM56dKtnFKz4BQ9MLZN6ruSiAuqWUZJy\nBMzHhQZu5ya6GwfLmfHFJpJFmiZ7LwM7ji0njK6oOkEAVSWFISkRgS5/xZ3ZBmpe\noA47veHfSMn5TvQs3flSZZq0zq+xQuqVkQs6un5c32KaooN2A/QPe7vXX3UzqxMY\nm0LdRxbITBZUwtZKMcxZy6sCAwEAAaOCAdswggHXMB8GA1UdIwQYMBaAFIyfES7m\n43oEpR5Vi0YIBKbtl3CmMB0GA1UdDgQWBBQZrbGUOzTmRrJdJOJ1+uHdAiFCnDAM\nBgNVHRMBAf8EAjAAMB0GA1UdEQQWM
BSBElMuS291bG91emlzQHV2YS5ubDAOBgNV\nHQ8BAf8EBAMCBLAwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMEMDQGA1Ud\nIAQtMCswDAYKKoZIhvdMBQICATAMBgpghkgBhv1sBB8BMA0GCyqGSIb3TAUCAwMD\nMIGFBgNVHR8EfjB8MDygOqA4hjZodHRwOi8vY3JsMy5kaWdpY2VydC5jb20vVEVS\nRU5BZVNjaWVuY2VQZXJzb25hbENBMy5jcmwwPKA6oDiGNmh0dHA6Ly9jcmw0LmRp\nZ2ljZXJ0LmNvbS9URVJFTkFlU2NpZW5jZVBlcnNvbmFsQ0EzLmNybDB7BggrBgEF\nBQcBAQRvMG0wJAYIKwYBBQUHMAGGGGh0dHA6Ly9vY3NwLmRpZ2ljZXJ0LmNvbTBF\nBggrBgEFBQcwAoY5aHR0cDovL2NhY2VydHMuZGlnaWNlcnQuY29tL1RFUkVOQWVT\nY2llbmNlUGVyc29uYWxDQTMuY3J0MA0GCSqGSIb3DQEBCwUAA4IBAQBjF6FSxMKF\nO3no2/2Bu1/ur4h6vIiKDHqQ6cxcgu9fvBbS6gX01Ov3y2SXHidJdlPf2f+nQMQv\nuo81wOZFGtLgN0SsanbOhhOm63kUIZh4TVMhTL5jJ1ybeVEv97E6iRlk5PwExGNH\nB2u9CDvt3A+cKXC4ieXJPuWnWBLbSGgM9JhlH7BW87hxhs9L0ZBAECPh4W0DbUmn\nBriCUMIw13cMQNvcgddlZ+t8+ABZGtBHjSRL2O6yJCBUWNv1eqlFqXP5I7vDF5ry\nsITe184PVPq9t26cCoGCwpbcxhqGdAmJt5LHNvFCd2pGO8QkdNwCJHNw/2HPWgje\nV5OqUdjlA+8R\n-----END CERTIFICATE-----
@@ -82,7 +82,7 @@ def handle_delivery(message):
def test_local():
home = expanduser("~")
transformer = DockerComposeTransformer(home+"/workspace/DRIP/docs/input_tosca_files/MOG/test_tosca2.yml")
version = '2'
compose = transformer.getnerate_compose(version)
print yaml.dump(compose)
...
@@ -42,7 +42,7 @@ class DockerComposeTransformer:
docker_types = set([])
node_types = self.get_node_types()
for node_type_key in node_types:
    if node_types[node_type_key] and isinstance(node_types[node_type_key],dict) and 'derived_from' in node_types[node_type_key].keys():
        if node_types[node_type_key]['derived_from'] == self.DOCKER_TYPE:
            docker_types.add(node_type_key)
return docker_types
@@ -83,7 +83,7 @@ class DockerComposeTransformer:
port_maps = []
if 'ports_mapping' in properties:
    ports_mappings = properties['ports_mapping']
    if ports_mappings and not isinstance(ports_mappings,str):
        for port_map_key in ports_mappings:
            port_map = ''
            if isinstance(ports_mappings,dict):
@@ -108,6 +108,11 @@ class DockerComposeTransformer:
            # port_map[host_port] = container_port
            port_map = str(host_port)+':'+str(container_port)
            port_maps.append(port_map)
    elif isinstance(ports_mappings,str):
        host_port = ports_mappings.split(":")[0]
        container_port = ports_mappings.split(":")[1]
        port_map = str(host_port)+':'+str(container_port)
        port_maps.append(port_map)
if 'in_ports' in properties:
    ports_mappings = properties['in_ports']
    for port_map_key in ports_mappings:
...
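The new `elif` branch accepts a single `"host:container"` string where the earlier code only handled lists and dicts of mappings. The string case, isolated as a hypothetical helper (non-string inputs fall through to the pre-existing branches, so this sketch returns an empty list for them):

```python
def port_maps_from(ports_mapping):
    # Handle only the newly supported single-string form, e.g. "8080:80".
    port_maps = []
    if isinstance(ports_mapping, str):
        host_port = ports_mapping.split(":")[0]
        container_port = ports_mapping.split(":")[1]
        port_maps.append(str(host_port) + ':' + str(container_port))
    return port_maps

assert port_maps_from('8080:80') == ['8080:80']
```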
@@ -41,10 +41,10 @@ class DRIPLoggingHandler(RabbitMQHandler):
if not self.connection or self.connection.is_closed or not self.channel or self.channel.is_closed:
    self.open_connection()
queue = 'log_qeue_' + self.user
self.channel.basic_publish(
    exchange='',
    routing_key=queue,
    body=self.format(record),
    properties=pika.BasicProperties(
        delivery_mode=2)
...
@@ -70,7 +70,7 @@ class DumpPlanner:
vm['name'] = node['id']
vm['type'] = self.COMPUTE_TYPE
if 'requirements' in node and node['requirements']:
    for req in node['requirements']:
        if 'host' in req and 'node_filter' in req['host']:
            vm['host'] = req['host']['node_filter']['capabilities']['host']
...
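The added `and node['requirements']` guards against a node whose `requirements` key is present but empty (a bare `requirements:` in YAML parses to None); the old membership check alone passed, and iterating over None then raised a TypeError. A sketch of the guarded extraction as a hypothetical helper, with fabricated node dicts:

```python
def hosts_of(node):
    # Mirrors the guarded requirements walk in DumpPlanner above.
    vm = {}
    if 'requirements' in node and node['requirements']:
        for req in node['requirements']:
            if 'host' in req and 'node_filter' in req['host']:
                vm['host'] = req['host']['node_filter']['capabilities']['host']
    return vm

# Key present but None: old code raised TypeError here, new check short-circuits.
assert hosts_of({'id': 'nodeA', 'requirements': None}) == {}
assert hosts_of({'id': 'nodeA', 'requirements': [
    {'host': {'node_filter': {'capabilities': {'host': {'mem_size': '4GB'}}}}}
]}) == {'host': {'mem_size': '4GB'}}
```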
@@ -99,14 +99,16 @@ def handle_delivery(message):
return json.dumps(response)
if __name__ == "__main__":
if(sys.argv[1] == "test_local"):
    home = expanduser("~")
    planner = DumpPlanner(home+"/workspace/DRIP/docs/input_tosca_files/MOG/test_tosca2.yml")
    print planner.plan()
else:
    logger.info("Input args: " + sys.argv[0] + ' ' + sys.argv[1] + ' ' + sys.argv[2])
    channel = init_chanel(sys.argv)
    global queue_name
    queue_name = sys.argv[2]
    start(channel)
#
...