Commit e76b8df3 authored by Luca Braun's avatar Luca Braun

Merge remote-tracking branch 'origin/develop' into develop

parents 4cda6e98 98fc8f0f
......@@ -31,7 +31,7 @@ This token is used for authentication as _regular user_ on all microservices cur
adds a blockchain transaction entry for ApplicationType with all the keys and values. These will be converted and stored in our own format for creating multilayers and communities.
# Business Logic Microservice
https://articonf1.itec.aau.at:30420/api/ui
https://articonf1.itec.aau.at:30420/api/ui/
This microservice contains use-case-specific information, such as schemas and contexts.
......@@ -41,16 +41,54 @@ This microservice contains use-case specific informations, like schemas and cont
## Context information
```GET https://articonf1.itec.aau.at:30420/api/use-cases/{use-case}/layers``` returns all layers from the schema used for clustering internally.
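As an illustrative sketch (the helper names and token handling are assumptions, not part of the service contract), the layers endpoint can be queried with the JWT obtained from the authentication step:

```python
import json
import urllib.request

BASE = "https://articonf1.itec.aau.at:30420/api"

def layers_url(use_case: str) -> str:
    # Build the context-information URL for one use case.
    return f"{BASE}/use-cases/{use_case}/layers"

def fetch_layers(use_case: str, jwt: str):
    # The Bearer token is the JWT obtained from the authentication step above.
    req = urllib.request.Request(
        layers_url(use_case),
        headers={"Authorization": f"Bearer {jwt}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

`fetch_layers("smart-energy", token)` would then return the layer definitions as parsed JSON; the use-case name here is only a placeholder.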
# Trace Retrieval Microservice
https://articonf1.itec.aau.at:30001/api/ui/
This microservice contains the nodes from the transactions preprocessed as defined in *Schema Information*.
```GET https://articonf1.itec.aau.at:30001/api/use_cases/{use_case}/transactions``` returns all flattened transactions, before splitting them into layers.
# Semantic Linking Microservice
https://articonf1.itec.aau.at:30101/api/ui/
This microservice contains the nodes from the transactions preprocessed as defined in *Schema Information*. Additionally, it splits the raw input into multiple layers.
This microservice splits the preprocessed transactions into multiple layers; the split transactions within a layer are called nodes.
```GET https://articonf1.itec.aau.at:30101/api/use-cases/{use-case}/nodes``` returns all preprocessed transactions, called nodes, before splitting them into layers.
```GET https://articonf1.itec.aau.at:30101/api/use-cases/{use_case}/tables/{table_name}/layers/{layer_name}/nodes``` returns all split transactions, called nodes, for the layer `layer_name`.
# Role Stage Discovery Microservice
https://articonf1.itec.aau.at:30103/api/ui
https://articonf1.itec.aau.at:30103/api/ui/
This microservice contains the communities based on clusters and the similarities between communities. It additionally contains time slices with subsets of clusters whose transactions happened in the corresponding time window.
Schemas and input data are supplied by the [Business Logic microservice](https://articonf1.itec.aau.at:30420/api/ui), [Semantic Linking microservice](https://articonf1.itec.aau.at:30101/api/ui/) and [Trace Retrieval microservice](https://articonf1.itec.aau.at:30001/api/ui/).
## Layers
Contains information about the schema, copied from the Business Logic microservice.
Returns the schemas and/or input data used for calculating the clustering, which is further used for calculating the similarity.
```GET https://articonf1.itec.aau.at:30103/api/use-cases/{use_case}/layers``` returns the layer information for the given use-case.
```GET https://articonf1.itec.aau.at:30103/api/use-cases/{use_case}/tables/{table}/layers/{layer_name}``` returns the information for a single layer.
```GET https://articonf1.itec.aau.at:30103/api/use-cases/{use_case}/tables/{table}/layers/{layer_name}/nodes``` returns all nodes contained in the layer, as fetched from the Semantic Linking microservice.
## Clusters
Contains the clustering results. Clustering is performed on all nodes inside one layer. Furthermore, the clusters are partitioned based on timestamps.
```GET https://articonf1.itec.aau.at:30103/api/use-cases/{use_case}/tables/{table}/layers/{layer_name}/clusters``` returns the identified clusters.
```GET https://articonf1.itec.aau.at:30103/api/use-cases/{use_case}/tables/{table}/layers/{layer_name}/timeslices``` returns the identified clusters partitioned based on their nodes' timestamps.
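A minimal sketch of the time-slice idea described above, assuming fixed-width windows and a numeric `timestamp` field on each node (both are assumptions; the service's actual slicing scheme may differ):

```python
from collections import defaultdict

def partition_into_time_slices(nodes, window_seconds=3600):
    # Group nodes into fixed-width time windows based on their timestamp.
    slices = defaultdict(list)
    for node in nodes:
        slices[int(node["timestamp"]) // window_seconds].append(node)
    return dict(slices)

nodes = [{"id": "a", "timestamp": 10}, {"id": "b", "timestamp": 3700}]
# timestamps 10 and 3700 fall into hour windows 0 and 1 respectively
```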
## RunId
When a similarity computation is executed, it has an associated RunId that uniquely identifies that execution.
```GET https://articonf1.itec.aau.at:30103/api/runIds``` returns all RunIds in the database.
## Similarity
Returns the computed similarity. Two clusters belonging to the SAME layer are given a similarity value by comparing them to another cluster belonging to a DIFFERENT layer. This is done for every cluster in the input data. This query returns all the calculated similarity values matching the given criteria (i.e. belonging to a use-case, table, etc.).
```GET https://articonf1.itec.aau.at:30103/api/use_cases/{use_case}/tables/{table}/clusterSimilarity``` returns all similarity values for the given use-case and table.
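The comparison described above can be sketched as a Euclidean distance between two same-layer clusters' connection weights towards clusters of a different layer. The weight dictionaries below are illustrative; this is a sketch of the idea, not the service's exact algorithm:

```python
import math

def similarity(weights_a, weights_b):
    # Euclidean distance between the connection-weight vectors of two
    # clusters from the SAME layer; keys are clusters of a DIFFERENT layer.
    keys = set(weights_a) | set(weights_b)
    return math.sqrt(sum((weights_a.get(k, 0) - weights_b.get(k, 0)) ** 2 for k in keys))

# two Price_Layer clusters, weighted towards two Destination_Layer clusters
d = similarity({"dest_0": 3, "dest_1": 1}, {"dest_0": 1, "dest_1": 1})
# -> 2.0
```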
This microservice contains the communities based on clusters and the similarities between communities. It additionally contains time slices with subsets of clusters whose transactions happened in the corresponding time window.
## Connected Cluster
Intermediate data structure used only by the function that computes the similarity. Clusters are connected only to other clusters belonging to a DIFFERENT layer.
The endpoints are currently being refactored, so please check the autogenerated Swagger UI documentation on the website.
\ No newline at end of file
```GET https://articonf1.itec.aau.at:30103/api/use_cases/{use_case}/tables/{table}/connectedClusters``` returns all connected clusters for the given use-case and table.
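The cross-layer restriction can be sketched as follows (cluster labels and layer names are illustrative):

```python
from itertools import combinations

def cross_layer_pairs(layer_clusters):
    # layer_clusters maps layer name -> list of cluster labels.
    # Pairs are only formed between clusters of DIFFERENT layers.
    for layer_a, layer_b in combinations(layer_clusters, 2):
        for ca in layer_clusters[layer_a]:
            for cb in layer_clusters[layer_b]:
                yield (layer_a, ca), (layer_b, cb)

pairs = list(cross_layer_pairs({"Price_Layer": [0, 1], "Destination_Layer": [0]}))
# -> 2 pairs, never connecting two clusters of the same layer
```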
......@@ -130,7 +130,49 @@ class Repository(MongoRepositoryBase):
if (run_id == None):
entries = super().get_entries(self._connected_clusters_collection, projection={'_id': 0})
else:
entries = super().get_entries(self._similarity_collection, selection={'cluster_runId' : run_id}, projection={'_id': 0})
entries = super().get_entries(self._connected_clusters_collection, selection={'cluster_runId' : run_id}, projection={'_id': 0})
output = []
for ent in entries:
output.append(ent)
return output
# print(ent)
#return [Cluster(cluster_dict=e, from_db=True) for e in entries]
def get_connected_clusters_for_use_case(self,use_case, run_id: str=None):#, layer_name: str):
''' Get Connected Clusters Data given the Use-Case from DB '''
if (run_id == None):
entries = super().get_entries(self._connected_clusters_collection, selection={'cluster_use_case': use_case}, projection={'_id': 0})
else:
entries = super().get_entries(self._connected_clusters_collection, selection={'cluster_runId' : run_id, 'cluster_use_case': use_case}, projection={'_id': 0})
output = []
for ent in entries:
output.append(ent)
return output
# print(ent)
#return [Cluster(cluster_dict=e, from_db=True) for e in entries]
def get_connected_clusters_for_table(self,use_case,table, run_id: str=None):#, layer_name: str):
''' Get Connected Clusters Data given the Use-Case and Table from DB '''
if (run_id == None):
entries = super().get_entries(self._connected_clusters_collection, selection={'cluster_use_case': use_case,'cluster_table': table}, projection={'_id': 0})
else:
entries = super().get_entries(self._connected_clusters_collection, selection={'cluster_runId' : run_id,'cluster_use_case': use_case,'cluster_table': table}, projection={'_id': 0})
output = []
for ent in entries:
output.append(ent)
return output
# print(ent)
#return [Cluster(cluster_dict=e, from_db=True) for e in entries]
def get_connected_clusters_by_name(self,use_case, table, layer_name, run_id: str=None):#, layer_name: str):
''' Get Connected Clusters Data from DB '''
if (run_id == None):
entries = super().get_entries(self._connected_clusters_collection, selection={'cluster_use_case': use_case,'cluster_table': table, 'cluster_layer' : layer_name}, projection={'_id': 0})
else:
entries = super().get_entries(self._connected_clusters_collection, selection={'cluster_runId' : run_id,'cluster_use_case': use_case,'cluster_table': table, 'cluster_layer' : layer_name}, projection={'_id': 0})
output = []
for ent in entries:
......@@ -175,8 +217,38 @@ class Repository(MongoRepositoryBase):
output.append(e)
return output
"""
def get_similarity_use_case(self,skipNr,batchSize,use_case, run_id: str=None):
''' Get Similarity Data from DB '''
if (run_id == None):
entries = super().get_entries(self._similarity_collection, selection={'use_case' : use_case}, projection={'_id': 0})
else:
entries = super().get_entries(self._similarity_collection, selection={'use_case' : use_case, 'runId' : run_id}, projection={'_id': 0})
#
return list(entries.sort([('_id', -1)]).skip(skipNr).limit(batchSize))
def get_similarity_table(self,skipNr,batchSize,use_case,table, run_id: str=None):
''' Get Similarity Data from DB '''
if (run_id == None):
entries = super().get_entries(self._similarity_collection, selection={'use_case' : use_case, 'table': table}, projection={'_id': 0})
else:
entries = super().get_entries(self._similarity_collection, selection={'use_case' : use_case, 'table': table, 'runId' : run_id}, projection={'_id': 0})
#
return list(entries.sort([('_id', -1)]).skip(skipNr).limit(batchSize))
def get_similarity_layer(self,skipNr,batchSize,use_case,table,layer, run_id: str=None):
''' Get Similarity Data from DB '''
if (run_id == None):
entries = super().get_entries(self._similarity_collection, selection={'use_case' : use_case, 'table': table, 'cluster_layer' : layer}, projection={'_id': 0})
else:
entries = super().get_entries(self._similarity_collection, selection={'use_case' : use_case, 'table': table, 'cluster_layer' : layer, 'runId' : run_id}, projection={'_id': 0})
#
return list(entries.sort([('_id', -1)]).skip(skipNr).limit(batchSize))
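The three retrieval methods above share one pagination scheme: skip `batchSize * batchNr` entries, then return at most `batchSize`. A minimal in-memory sketch of that arithmetic (a plain list stands in for the MongoDB cursor):

```python
def batch_slice(entries, batch_nr, batch_size=1000):
    # Mirrors skipNr = batchSize * batchNr followed by skip/limit.
    if batch_nr < 0:
        raise ValueError("batch number must be non-negative")
    skip = batch_size * batch_nr
    return entries[skip:skip + batch_size]

page = batch_slice(list(range(2500)), batch_nr=2)
# -> the 500 remaining entries 2000..2499
```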
#endregion
#region connected_run
......
......@@ -67,7 +67,7 @@ def loadJson(url:str) :
return jsonData
def getClusterDataFromMongo(layerNameList,limitNrCluster,limitNrNodes):
def getClusterDataFromMongo(layerNameList,limitNrCluster,limitNrNodes,use_case,table):
''' Calculates the nr of connections/weights between the clusters contained in the "inputLayerDict". Connections are made between clusters from DIFFERENT layers.
:param List[string] layerNameList: Name of the layers to pull from the DB
......@@ -93,7 +93,7 @@ def getClusterDataFromMongo(layerNameList,limitNrCluster,limitNrNodes):
#imports and translates the data from JSON into useful format
#returns layerdiction -> Layer -> clusterDict -> Cluster -> nodesDict -> Nodes
for name in layerNameList:
newData = get_mongoDB_cluster_by_layerName(name)#repo.get_clusters_for_layer(name)
newData = get_mongoDB_cluster_by_layerName(use_case,table,name)#repo.get_clusters_for_layer(name)
if newData is not None and len(newData) != 0:
layerDict = populateWithNewNodesSingleLayer(newData[0:limitNrCluster],layerDict,limitNrNodes)
......@@ -290,7 +290,7 @@ def makeChangeNodesDict(inputList,cluster_label,cluster_layer):
outputDict[key]= newNode
return outputDict
def get_mongoDB_cluster_by_layerName(name):
res = repo.get_clusters_for_layer(name)
def get_mongoDB_cluster_by_layerName(use_case, table , layer_name):
res = repo.get_clusters_for_layer(use_case, table, layer_name)
return [c.to_serializable_dict() for c in res]
......@@ -6,7 +6,7 @@ from processing.similarityFiles.miscFunctions import *
from db.repository import Repository
repo = Repository()
def outputFileLayerFunction(layerDict,limitNrNodes,limitNrCluster,runId):
def outputFileLayerFunction(layerDict,limitNrNodes,limitNrCluster,runId,table,use_case):
''' Writes the layerDict data to a JSON file.
:param Dict{string: Layer} layerDict: Object which contains Data about the Layers, Clusters and Nodes
......@@ -17,7 +17,7 @@ def outputFileLayerFunction(layerDict,limitNrNodes,limitNrCluster,runId):
'''
layerJSON = convertLayerDictToJSON(layerDict,runId)
layerJSON = convertLayerDictToJSON(layerDict,runId,table,use_case)
outputJSON = json.dumps(layerJSON, default=lambda o: o.__dict__, indent=4)
try:
......@@ -28,7 +28,7 @@ def outputFileLayerFunction(layerDict,limitNrNodes,limitNrCluster,runId):
def outputFileSimilFunction(similarityDict,limitNrNodes,limitNrCluster,runId):
def outputFileSimilFunction(similarityDict,limitNrNodes,limitNrCluster,runId,table,use_case):
''' Writes the similarityDict data to a JSON file.
......@@ -40,7 +40,7 @@ def outputFileSimilFunction(similarityDict,limitNrNodes,limitNrCluster,runId):
'''
similJSON = convertSimilarityDictToJSON(similarityDict,runId)
similJSON = convertSimilarityDictToJSON(similarityDict,runId,table,use_case)
outputJSON = json.dumps(similJSON, default=lambda o: o.__dict__, indent=4)
try:
......@@ -77,7 +77,7 @@ def outputFileTimeFunction(timelist,limitNrNodes,limitNrCluster,runId):
print("Error occured when writing the resultTimeExec file")
def outputMongoConnClustDict(inputDict,runId):
def outputMongoConnClustDict(inputDict,runId,table,use_case):
''' Stores connected_clusters in the database.
......@@ -89,9 +89,9 @@ def outputMongoConnClustDict(inputDict,runId):
#inputDict["Timestamp"] = str(datetime.datetime.now())
add_conn_clusters(inputDict,runId)
add_conn_clusters(inputDict,runId,table,use_case)
def outputMongoSimilarity(inputDict,runId):
def outputMongoSimilarity(inputDict,runId,table,use_case):
''' Stores cluster_similarity in the database.
:param Dict() inputDict: Contains the data to insert
......@@ -99,7 +99,7 @@ def outputMongoSimilarity(inputDict,runId):
:param string runId: Id of the Run
'''
add_similarity(inputDict,runId)
add_similarity(inputDict,runId,table,use_case)
def add_connected_run():
......@@ -116,7 +116,7 @@ def add_connected_run():
inserted_result = repo.add_connected_run(runDict)
return str(inserted_result.inserted_id)
def add_conn_clusters(inputDict,runId):
def add_conn_clusters(inputDict,runId,table,use_case):
''' Stores connected_clusters in the database.
:param Dict() inputDict: Contains the data to insert
......@@ -125,11 +125,11 @@ def add_conn_clusters(inputDict,runId):
'''
outputJSON = convertLayerDictToJSON(inputDict,runId)
outputJSON = convertLayerDictToJSON(inputDict,runId,table,use_case)
for element in outputJSON:
repo.add_connected_cluster(element)
def add_similarity(inputDict,runId):
def add_similarity(inputDict,runId,table,use_case):
''' Stores cluster_similarity in the database.
:param Dict() inputDict: Contains the data to insert
......@@ -138,6 +138,6 @@ def add_similarity(inputDict,runId):
'''
outputJSON = convertSimilarityDictToJSON(inputDict,runId)
outputJSON = convertSimilarityDictToJSON(inputDict,runId,table,use_case)
for element in outputJSON:
repo.add_single_similarity(element)
\ No newline at end of file
......@@ -42,7 +42,7 @@ def totalNumberOfClusters(inputLayerDict):
return clustCount
def convertLayerDictToJSON(layerDict, runId):
def convertLayerDictToJSON(layerDict, runId,table,use_case):
''' Converts a Layer object to JSON format.
:param Dict{string: Layer} layerDict: Object which contains Data about the Layers, Clusters and Nodes
......@@ -57,6 +57,8 @@ def convertLayerDictToJSON(layerDict, runId):
outputJSON.append({
"cluster_label" : curCluster.cluster_label,
"cluster_layer" : curCluster.cluster_layer,
"cluster_table" : table,
"cluster_use_case": use_case,
"cluster_runId" : runId,
"cluster_connClustDict" : changeTupleDictToDictList(curCluster.cluster_connClustDict),
"cluster_connNodesDict" : getFrozensetFromConnNodesDict(curCluster.cluster_connNodesDict), #Don
......@@ -109,7 +111,7 @@ def getFrozensetFromConnNodesDict(inputDict):
return output
def convertSimilarityDictToJSON(inputDict,runId):
def convertSimilarityDictToJSON(inputDict,runId,table,use_case):
''' Converts a Similarity Dictionary to JSON format. For outputting to DB
:param Dict{} similarityDict: Object which contains Data about the Computed similarities between Clusters
......@@ -125,6 +127,8 @@ def convertSimilarityDictToJSON(inputDict,runId):
auxDict["cluster_layer"] = tupleKey[2]
auxDict["similarityValues"] = inputDict[tupleKey]
auxDict["runId"] = runId
auxDict["table"] = table
auxDict["use_case"] = use_case
similList.append(auxDict)
similToJSON = similList
#outputJSON = json.dumps(similToJSON, default=lambda o: o.__dict__, indent=4)
......
......@@ -39,7 +39,7 @@ from processing.similarityFiles.dataOutput import *
outputToFileFLAG = True
def main(layerNameList:List[str] = ["Price_Layer","FinishedTime_Layer","Destination_Layer"]):
def main(layerNameList:List[str] , table:str , use_case: str):
'''
Executes the similarity calculation by calculating weights between clusters in different layers.
Then calculating the Euclidean distance between nodes in the same layer based on one other layer each.
......@@ -48,7 +48,8 @@ def main(layerNameList:List[str] = ["Price_Layer","FinishedTime_Layer","Destinat
:param layerNameList: The list of layer names as strings
'''
print("Entered Similarity Main")
if len(layerNameList)==0:
return
timelist = []
timelist.append(currentTime())#starting time
......@@ -67,7 +68,7 @@ def main(layerNameList:List[str] = ["Price_Layer","FinishedTime_Layer","Destinat
limitNrNodes = -1 #per Layer
layerDict = getClusterDataFromMongo(layerNameList,limitNrCluster,limitNrNodes)
layerDict = getClusterDataFromMongo(layerNameList,limitNrCluster,limitNrNodes,use_case,table)
if layerDict is None or len(layerDict) == 0:
LOGGER.error(f"No data for any of the following layers existed: {str(layerNameList)}. Similarity calculation was not performed.")
return
......@@ -98,13 +99,13 @@ def main(layerNameList:List[str] = ["Price_Layer","FinishedTime_Layer","Destinat
if (outputToFileFLAG == True):
print("Outputing data")
outputFileLayerFunction(layerDict,totalNodes,totalClusters,runId)
outputFileSimilFunction(similarityDict,totalNodes,totalClusters,runId)
outputFileLayerFunction(layerDict,totalNodes,totalClusters,runId,table,use_case)
outputFileSimilFunction(similarityDict,totalNodes,totalClusters,runId,table,use_case)
outputFileTimeFunction(timelist,totalNodes,totalClusters,runId)
#Output to DB
outputMongoConnClustDict(layerDict,runId)
outputMongoSimilarity(similarityDict,runId)
outputMongoConnClustDict(layerDict,runId,table,use_case)
outputMongoSimilarity(similarityDict,runId,table,use_case)
#Currently not used in the calculation of connections/similarity, developed for possible future uses
......@@ -122,6 +123,6 @@ def main(layerNameList:List[str] = ["Price_Layer","FinishedTime_Layer","Destinat
return
##########START##########
if __name__ is '__main__':
main()
#if __name__ is '__main__':
#main()
#########FINISH##########
......@@ -4,8 +4,8 @@ from db.entities import ClusterSet
repo = Repository()
def get_by_name(use_case, use_case_table, name):
res = repo.get_clusters_for_layer(use_case, use_case_table, name)
def get_by_name(use_case, table, layer_name):
res = repo.get_clusters_for_layer(use_case, table, layer_name)
if res is None or len(res) == 0:
return Response(status=404)
else:
......
......@@ -16,3 +16,45 @@ def get_conn_clusters():
else:
return result
def get_conn_clusters_use_case(use_case):
''' Gets connected_clusters from the database.
:returns: Returns connected cluster objects from the DB
:rtype: Dict
'''
result = repo.get_connected_clusters_for_use_case(use_case)
if result is None or len(result) == 0:
print("MongoDb Get Error: Response 404")
return Response(status=404)
else:
return result
def get_conn_clusters_table(use_case,table):
''' Gets connected_clusters from the database.
:returns: Returns connected cluster objects from the DB
:rtype: Dict
'''
result = repo.get_connected_clusters_for_table(use_case, table)
if result is None or len(result) == 0:
print("MongoDb Get Error: Response 404")
return Response(status=404)
else:
return result
def get_conn_clusters_name(use_case,table,layer_name):
''' Gets connected_clusters from the database.
:returns: Returns connected cluster objects from the DB
:rtype: Dict
'''
result = repo.get_connected_clusters_by_name(use_case,table,layer_name)
if result is None or len(result) == 0:
print("MongoDb Get Error: Response 404")
return Response(status=404)
else:
return result
......@@ -26,15 +26,15 @@ def get_by_use_case(use_case):
else:
return Response(status=404)
def get_by_table(use_case, use_case_table):
res = repo.get_layers_for_table(use_case, use_case_table)
def get_by_table(use_case, table):
res = repo.get_layers_for_table(use_case, table)
if len(res) > 0:
return [l.to_serializable_dict() for l in res]
else:
return Response(status=404)
def get_by_name(use_case, use_case_table, name):
res = repo.get_layer_by_name(use_case, use_case_table, name)
def get_by_name(use_case, table, layer_name):
res = repo.get_layer_by_name(use_case, table, layer_name)
if res is not None:
return res.to_serializable_dict()
else:
......@@ -43,8 +43,8 @@ def get_by_name(use_case, use_case_table, name):
#endregion
#region nodes
def get_nodes(use_case, use_case_table, name):
res = repo.get_layer_nodes(use_case, use_case_table, name)
def get_nodes(use_case, table, layer_name):
res = repo.get_layer_nodes(use_case, table, layer_name)
# print(res)
return res
......
......@@ -23,3 +23,60 @@ def get_similarity(layer_name,batchNr):
return Response(status=404)
else:
return result
def get_similarity_use_case(use_case,batchNr):
''' Gets cluster_similarity from the database.
:returns: Returns similarity objects from the DB
:rtype: Dict
'''
batchSize = 1000
if int(batchNr)<0:
print("Batch number needs to be a non-negative integer")
return Response(status=404)
skipNr = batchSize*int(batchNr)
#get_similarity(self,skipNr,batchSize, cluster_layer: str= None, run_id: str=None)
result = repo.get_similarity_use_case(skipNr, batchSize, use_case)
if result is None or len(result) == 0:
print("MongoDb Get Error: Response 404")
return Response(status=404)
else:
return result
def get_similarity_table(use_case,table,batchNr):
''' Gets cluster_similarity from the database.
:returns: Returns similarity objects from the DB
:rtype: Dict
'''
batchSize = 1000
if int(batchNr)<0:
print("Batch number needs to be a non-negative integer")
return Response(status=404)
skipNr = batchSize*int(batchNr)
#get_similarity(self,skipNr,batchSize, cluster_layer: str= None, run_id: str=None)
result = repo.get_similarity_table(skipNr, batchSize, use_case,table)
if result is None or len(result) == 0:
print("MongoDb Get Error: Response 404")
return Response(status=404)
else:
return result
def get_similarity_layer(use_case,table,layer_name,batchNr):
''' Gets cluster_similarity from the database.
:returns: Returns similarity objects from the DB
:rtype: Dict
'''
batchSize = 1000
if int(batchNr)<0:
print("Batch number needs to be a non-negative integer")
return Response(status=404)
skipNr = batchSize*int(batchNr)
#get_similarity(self,skipNr,batchSize, cluster_layer: str= None, run_id: str=None)
result = repo.get_similarity_layer(skipNr, batchSize,use_case,table, layer_name)
if result is None or len(result) == 0:
print("MongoDb Get Error: Response 404")
return Response(status=404)
else:
return result
......@@ -4,8 +4,8 @@ from db.entities import TimeSlice
repo = Repository()
def get_by_name(use_case, use_case_table, name):
res = repo.get_time_slices_by_name(use_case, use_case_table, name)
def get_by_name(use_case, table, layer_name):
res = repo.get_time_slices_by_name(use_case, table, layer_name)
if res is not None and len(res) != 0:
return [e.to_serializable_dict() for e in res]
......
......@@ -7,17 +7,38 @@ repo = Repository()
def run_similarity_calc_per_use_case():
layers = repo.get_layers()
uc_layers = {}
# uc_layers = {}
# for layer in layers:
# uc = layer.use_case
# if uc not in uc_layers:
# uc_layers[uc] = []
# uc_layers[uc].append(layer.layer_name)
# for key in uc_layers:
# layers2 = uc_layers[key]
# print(f"Running for use case {key} with layers {str(layers2)}.")
# SimilarityCalc.main(layerNameList=layers2)
uc_dict = dict()
# use_case[table[layer_name]]
for layer in layers:
uc = layer.use_case
if uc not in uc_layers:
uc_layers[uc] = []
uc_layers[uc].append(layer.layer_name)
for key in uc_layers:
layers = uc_layers[key]
print(f"Running for use case {key} with layers {str(layers)}.")
SimilarityCalc.main(layerNameList=layers)
use_case = layer.use_case
table = layer.use_case_table
if use_case not in uc_dict:
uc_dict[use_case] = dict()
#aux = uc_dict[use_case]
if table not in uc_dict[use_case]:
uc_dict[use_case][table] = []
uc_dict[use_case][table].append(layer.layer_name)
for uc in uc_dict:
for table in uc_dict[uc]:
layers2 = uc_dict[uc][table]
print(f"Running for use case {uc}, table {table}, with layers {str(layers2)}.")
SimilarityCalc.main(layers2,table,uc)
if __name__ == '__main__':
......
......@@ -9,12 +9,14 @@ for modules_path in modules_paths:
sys.path.insert(1, modules_path)
from messaging.MessageHandler import MessageHandler
from db.repository import Repository
# file to read the data from
CSV_FILE = r'Energy_Dataset.csv'
handler = MessageHandler()
CSV_FILE = r'dummy_upload\smart_energy\Energy_Dataset.csv'
handler = MessageHandler(Repository())
processed_transactions = []
def upload_transaction(transaction):
'''{"type": "new-trace",
"content": {"use_case": "smart-energy", "table": "smart-energy", "id": "dd2c5146c919b046d77a32a5cf553d5133163562f7b7e1298c878c575d516025",
......@@ -28,7 +30,31 @@ def upload_transaction(transaction):
'id': uid,
'properties': transaction,
}
handler.handle_new_trace(t)
# handler.handle_new_trace(t)
processed_transactions.append(t)
def store_transactions_for_mirsat():
'''
Stores the processed transactions as if they were returned
by the Trace Retrieval microservice after fixing the message queue bug.
'''
flattened_transactions = []
for transaction in processed_transactions:
transaction = transaction['properties']
transaction['use_case'] = transaction['ApplicationType']
del transaction['ApplicationType']
transaction['table'] = transaction['docType']
del transaction['docType']
flattened_transactions.append(transaction)
import json
with open('flattened_smart_energy_data.json', 'w') as file:
file.write(json.dumps(flattened_transactions))
if __name__ == '__main__':
......@@ -56,3 +82,4 @@ if __name__ == '__main__':
upload_transaction(transaction)
store_transactions_for_mirsat()
\ No newline at end of file
......@@ -90,6 +90,7 @@ class MessageHandler:
for prop in layer.total_properties:
node[prop] = content["properties"][prop]
node["layer_name"] = layer.layer_name
node["table"] = layer.table
node["use_case"] = layer.use_case
......
......@@ -11,6 +11,7 @@ from typing import List
class DummyMongoRepo:
'''Dummy class to be used for testing the MessageHandler'''
last_trace = None
layernodes = []
def insert_trace(self, trace):
self.last_trace = trace
......@@ -34,7 +35,9 @@ class DummyMongoRepo:
]
def add_layer_nodes(self, nodes: List):
pass
self.layernodes.extend(nodes)
return
class Test_Pipeline(unittest.TestCase):
handler = None
......@@ -68,6 +71,7 @@ class Test_Pipeline(unittest.TestCase):
def testTraceProcessing(self):
msg = self._buildTraceMessage()
self.handler.handle_new_trace(msg["content"])
self.assertEqual(len(self.handler._repository.layernodes),1)
if __name__ == '__main__':
......
from _add_use_case_scripts.car_sharing.tables.requestPost import postLayersToSwagger, postTableToSwagger
from _add_use_case_scripts.requestPost import postLayersToSwagger, postTableToSwagger
def add_table(use_case: str, table_name:str):
'''
......
from _add_use_case_scripts.car_sharing.tables.requestPost import postLayersToSwagger, postTableToSwagger
from _add_use_case_scripts.requestPost import postLayersToSwagger, postTableToSwagger
def add_table(use_case: str, table_name: str):
'''
......
from _add_use_case_scripts.car_sharing.tables.requestPost import postLayersToSwagger, postTableToSwagger
from _add_use_case_scripts.requestPost import postLayersToSwagger, postTableToSwagger
def add_table(use_case: str, table_name: str):
'''
......
from _add_use_case_scripts.car_sharing.tables.requestPost import postLayersToSwagger, postTableToSwagger
from _add_use_case_scripts.requestPost import postLayersToSwagger, postTableToSwagger
def add_table(use_case: str, table_name: str):
'''
......
from _add_use_case_scripts.car_sharing.tables.requestPost import postLayersToSwagger, postTableToSwagger
from _add_use_case_scripts.requestPost import postLayersToSwagger, postTableToSwagger
def add_table(use_case: str, table_name: str):
'''
......
from _add_use_case_scripts.car_sharing.tables.requestPost import postLayersToSwagger, postTableToSwagger
from _add_use_case_scripts.requestPost import postLayersToSwagger, postTableToSwagger
def add_table(use_case: str, table_name: str):
......
import sys
import os
from pathlib import Path
from typing import Dict, Any
import requests
modules_paths = ['.', '../../../modules/']
for modules_path in modules_paths:
if os.path.exists(modules_path):
sys.path.insert(1, modules_path)
from _add_use_case_scripts.crowd_journalism.tables import add_video,add_tag,add_purchase,add_event,add_classification
import network_constants as nc
from security.token_manager import TokenManager
def add_use_case(use_case: str):
jwt = TokenManager.getInstance().getToken()
url = f"https://articonf1.itec.aau.at:30420/api/use-cases"
response = requests.post(
url,
verify=False,
proxies = { "http":None, "https":None },
headers = { "Authorization": f"Bearer {jwt}"},
json = {"name": use_case}
)
print(url+": "+str(response.content))
if __name__ == "__main__":
use_case = "crowd-journalism"
# disable ssl warnings :)
requests.packages.urllib3.disable_warnings()
add_use_case(use_case)
add_video.main(use_case)
add_tag.main(use_case)
add_classification.main(use_case)
add_event.main(use_case)
add_purchase.main(use_case)
\ No newline at end of file
from _add_use_case_scripts.requestPost import postLayersToSwagger, postTableToSwagger
def add_table(use_case: str, table_name: str):
'''
take the columns and add the mappings at the server
replace all "/"'s in the internal representation with a "_"
'''
columns = [
# "docType",
"objecttype",
"userid",
"videoid",
"informative",
"impact",
"trustiness",
"lastupdate"
]
columns = { c : c for c in columns }
columns["UniqueID"] = "userid+videoid"
table = {
"name": table_name,
"mappings": columns
}
postTableToSwagger(use_case,table)
def add_layers(use_case:str, table_name: str):
layers = [
{ #Useless, as all objects are Classification?
"use_case": use_case,
"table": table_name,
"name": "Object_Type_Layer",
"properties": [
"UniqueID",
"objecttype",
"userid",
"videoid",
"informative",
"impact",
"trustiness",
"lastupdate"
],
"cluster_properties": [
"objecttype"
]
},
{
"use_case": use_case,
"table": table_name,
"name": "Informative_Layer",
"properties": [
"UniqueID",
"objecttype",
"userid",
"videoid",
"informative",
"impact",
"trustiness",
"lastupdate"
],
"cluster_properties": [
"informative"
]
},
{
"use_case": use_case,
"table": table_name,
"name": "Impact_Layer",
"properties": [
"UniqueID",
"objecttype",
"userid",
"videoid",
"informative",
"impact",
"trustiness",
"lastupdate"
],
"cluster_properties": [
"impact"
]
},
{
"use_case": use_case,
"table": table_name,
"name": "Trust_Layer",
"properties": [
"UniqueID",
"objecttype",
"userid",
"videoid",
"informative",
"impact",
"trustiness",
"lastupdate"
],
"cluster_properties": [
"trustiness"
]
}
]
postLayersToSwagger(use_case,layers)
def main(use_case: str):
print("Classification")
table_name = "classification"
add_table(use_case,table_name)
add_layers(use_case,table_name)
\ No newline at end of file
from _add_use_case_scripts.requestPost import postLayersToSwagger, postTableToSwagger
def add_table(use_case: str, table_name: str):
'''
take the columns and add the mappings at the server
replace all "/"'s in the internal representation with a "_"
'''
#TODO: split eventEpicenter
#TODO: tags is an array, deal with arrays
columns = [
# "docType",
"objecttype",
"eventid",
#"tags",
"eventEpicenter", #TODO
"range"
]
columns = { c : c for c in columns }
columns["UniqueID"] = "eventid"
columns["firstTag"] = "tags[0]"
table = {
"name": table_name,
"mappings": columns
}
postTableToSwagger(use_case,table)
def add_layers(use_case:str, table_name: str):
layers = [
{ #Useless as all objects are of the same type???
"use_case": use_case,
"table": table_name,
"name": "Object_Type_Layer",
"properties": [
"UniqueID",
"objecttype",
"eventid",
"eventEpicenter",
"range",
"firstTag"
],
"cluster_properties": [
"objecttype"
]
},
{
"use_case": use_case,
"table": table_name,
"name": "Tag_Layer",
"properties": [
"UniqueID",
"objecttype",
"eventid",
"eventEpicenter",
"range",
"firstTag"
],
"cluster_properties": [
"firstTag"
]
},
{
"use_case": use_case,
"table": table_name,
"name": "Event_Epicenter_Layer",
"properties": [
"UniqueID",
"objecttype",
"eventid",
"evenEpicenter",
"range",
"firstTag"
],
"cluster_properties": [
"eventEpicenter"
]
}
]
postLayersToSwagger(use_case,layers)
def main(use_case: str):
print("event")
table_name = "event"
add_table(use_case,table_name)
add_layers(use_case,table_name)
from _add_use_case_scripts.requestPost import postLayersToSwagger, postTableToSwagger
def add_table(use_case: str, table_name: str):
'''
take the columns and add the mappings at the server
replace all "/"'s in the internal representation with a "_"
'''
columns = [
# "docType",
"objecttype",
"timestamp",
"userid",
"videoid",
"price",
"ownerid"
]
columns = { c : c for c in columns }
columns["UniqueID"] = "userid+videoid+ownerid"
table = {
"name": table_name,
"mappings": columns
}
postTableToSwagger(use_case,table)
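The `UniqueID` mapping above joins three source fields with `+`. A hypothetical sketch of how such a composite key could be resolved — the actual server-side handling is not part of this diff, and both `resolve_composite` and the `+`-separator semantics are assumptions:

```python
# Sketch only: assumes "+" in a mapping value concatenates the referenced fields.
def resolve_composite(transaction: dict, mapping: str) -> str:
    return "".join(str(transaction[part]) for part in mapping.split("+"))

tx = {"userid": "u1", "videoid": "v2", "ownerid": "o3"}
print(resolve_composite(tx, "userid+videoid+ownerid"))  # u1v2o3
```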
def add_layers(use_case:str, table_name: str):
layers = [
{ # Useless as all objects are of the same type?
"use_case": use_case,
"table": table_name,
"name": "Object_Type_Layer",
"properties": [
"UniqueID",
"objecttype",
"timestamp",
"userid",
"videoid",
"price",
"ownerid"
],
"cluster_properties": [
"objecttype"
]
},
{
"use_case": use_case,
"table": table_name,
"name": "Price_Layer",
"properties": [
"UniqueID",
"objecttype",
"timestamp",
"userid",
"videoid",
"price",
"ownerid"
],
"cluster_properties": [
"price"
]
},
{
"use_case": use_case,
"table": table_name,
"name": "Owner_Layer",
"properties": [
"UniqueID",
"objecttype",
"timestamp",
"userid",
"videoid",
"price",
"ownerid"
],
"cluster_properties": [
"ownerid"
]
},
{
"use_case": use_case,
"table": table_name,
"name": "Buyer_Layer",
"properties": [
"UniqueID",
"objecttype",
"timestamp",
"userid",
"videoid",
"price",
"ownerid"
],
"cluster_properties": [
"userid"
]
}
]
postLayersToSwagger(use_case,layers)
def main(use_case: str):
print("purchase")
table_name = "purchase"
add_table(use_case,table_name)
add_layers(use_case,table_name)
from _add_use_case_scripts.requestPost import postLayersToSwagger, postTableToSwagger
def add_table(use_case: str, table_name: str):
'''
take the columns and add the mappings at the server
replace all "/"'s in the internal representation with a "_"
'''
columns = [
# "docType",
"objecttype",
"tag"
]
columns = { c : c for c in columns }
columns["UniqueID"] = "objecttype+tag"
table = {
"name": table_name,
"mappings": columns
}
postTableToSwagger(use_case,table)
def add_layers(use_case:str, table_name: str):
layers = [
{ # Useless as all objects are the same type?
"use_case": use_case,
"table": table_name,
"name": "Object_Type_Layer",
"properties": [
"UniqueID",
"objecttype",
"tag"
],
"cluster_properties": [
"objecttype"
]
},
{
"use_case": use_case,
"table": table_name,
"name": "Tag_Layer",
"properties": [
"UniqueID",
"objecttype",
"tag"
],
"cluster_properties": [
"tag"
]
}
]
postLayersToSwagger(use_case,layers)
def main(use_case: str):
print("tag")
table_name = "tag"
add_table(use_case,table_name)
add_layers(use_case,table_name)
from _add_use_case_scripts.requestPost import postLayersToSwagger, postTableToSwagger
def add_table(use_case: str, table_name: str):
'''
take the columns and add the mappings at the server
replace all "/"'s in the internal representation with a "_"
'''
columns = [
# "docType",
"objecttype",
"videoid",
"duration",
"price",
"creator",
"creationTimestamp",
#"tags",
"geolocation",
"eventid",
"lastupdate",
"md5",
"informativeRating",
"impactRating",
"trustinessRating",
"ready",
"path",
"preview",
#"thumbnails" #not important?
]
columns = { c : c for c in columns }
columns["UniqueID"] = "videoid"
columns["encodedAudio"] = "codec//audio"
columns["encodedVideo"] = "codec//video"
columns["firstTag"] = "tags[0]"
table = {
"name": table_name,
"mappings": columns
}
postTableToSwagger(use_case,table)
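The `codec//audio` and `codec//video` mapping values suggest that `//` separates nesting levels in the raw transaction document. A minimal sketch under that assumption — `resolve_mapping` is hypothetical and not part of the services:

```python
# Sketch only: assumes "//" in a mapping value walks into nested dictionaries.
def resolve_mapping(transaction: dict, path: str):
    value = transaction
    for part in path.split("//"):
        value = value[part]
    return value

sample = {"codec": {"audio": "aac", "video": "h264"}, "videoid": "v1"}
print(resolve_mapping(sample, "codec//audio"))  # aac
print(resolve_mapping(sample, "videoid"))       # v1
```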
def add_layers(use_case:str, table_name: str):
layers = [
{ # Useless as all objects are of the same type?
"use_case": use_case,
"table": table_name,
"name": "Object_Type_Layer",
"properties": [
"objecttype",
"videoid",
"duration",
"price",
"creator",
"creationTimestamp",
"lastupdate",
"firstTag"
],
"cluster_properties": [
"objecttype"
]
},
{
"use_case": use_case,
"table": table_name,
"name": "Price_Layer",
"properties": [
"UniqueID",
"objecttype",
"creationTimestamp",
"geolocation",
"videoid",
"price",
"informativeRating",
"impactRating",
"trustinessRating",
"firstTag"
],
"cluster_properties": [
"price"
]
},
{
"use_case": use_case,
"table": table_name,
"name": "Tag_Layer",
"properties": [
"UniqueID",
"objecttype",
"creationTimestamp",
"geolocation",
"videoid",
"price",
"informativeRating",
"impactRating",
"trustinessRating",
"firstTag"
],
"cluster_properties": [
"firstTag"
]
},
{
"use_case": use_case,
"table": table_name,
"name": "Informative_Layer",
"properties": [
"UniqueID",
"objecttype",
"creationTimestamp",
"geolocation",
"videoid",
"price",
"informativeRating",
"impactRating",
"trustinessRating",
"firstTag"
],
"cluster_properties": [
"informativeRating"
]
},
{
"use_case": use_case,
"table": table_name,
"name": "Impact_Layer",
"properties": [
"UniqueID",
"objecttype",
"creationTimestamp",
"geolocation",
"videoid",
"price",
"informativeRating",
"impactRating",
"trustinessRating",
"firstTag"
],
"cluster_properties": [
"impactRating"
]
},
{
"use_case": use_case,
"table": table_name,
"name": "Trust_Layer",
"properties": [
"UniqueID",
"objecttype",
"creationTimestamp",
"geolocation",
"videoid",
"price",
"informativeRating",
"impactRating",
"trustinessRating",
"firstTag"
],
"cluster_properties": [
"trustinessRating"
]
},
{
"use_case": use_case,
"table": table_name,
"name": "Location_Layer",
"properties": [
"UniqueID",
"objecttype",
"creationTimestamp",
"geolocation",
"videoid",
"price",
"informativeRating",
"impactRating",
"trustinessRating",
"firstTag"
],
"cluster_properties": [
"geolocation"
]
},
{
"use_case": use_case,
"table": table_name,
"name": "Video_Age_Layer",
"properties": [
"UniqueID",
"objecttype",
"creationTimestamp",
"geolocation",
"videoid",
"price",
"informativeRating",
"impactRating",
"trustinessRating",
"firstTag"
],
"cluster_properties": [
"creationTimestamp"
]
}
]
postLayersToSwagger(use_case,layers)
def main(use_case: str):
print("video")
table_name = "video"
add_table(use_case,table_name)
add_layers(use_case,table_name)
......@@ -2,6 +2,7 @@
import network_constants as nc
from security.token_manager import TokenManager
import requests
from typing import List
def postTableToSwagger(use_case:str, table:dict ):
......@@ -20,7 +21,7 @@ def postTableToSwagger(use_case:str, table:dict ):
print(url+": "+str(response.status_code)+" MSG:"+str(response.content))
def postLayersToSwagger(use_case:str, layers: List[dict]):
jwt = TokenManager.getInstance().getToken()
......
import sys
import os
from pathlib import Path
from typing import Dict, Any
import requests
modules_paths = ['.', '../../../modules/']
for modules_path in modules_paths:
if os.path.exists(modules_path):
sys.path.insert(1, modules_path)
from _add_use_case_scripts.vialog.tables import add_user, add_video
import network_constants as nc
from security.token_manager import TokenManager
def add_use_case(use_case: str):
#use_case = "vialog"
jwt = TokenManager.getInstance().getToken()
url = f"https://articonf1.itec.aau.at:30420/api/use-cases"
response = requests.post(
url,
verify=False,
proxies = { "http":None, "https":None },
headers = { "Authorization": f"Bearer {jwt}"},
json = {"name": use_case}
)
print(url+": "+str(response.content))
if __name__ == "__main__":
use_case = "vialog"
# disable ssl warnings :)
requests.packages.urllib3.disable_warnings()
add_use_case(use_case)
add_user.main(use_case)
add_video.main(use_case)
from _add_use_case_scripts.requestPost import postLayersToSwagger, postTableToSwagger
def add_table(use_case: str, table_name: str):
'''
take the columns and add the mappings at the server
replace all "/"'s in the internal representation with a "_"
'''
columns = [
# "docType",
"userId",
"rewardBalance"
]
columns = { c : c for c in columns }
columns["UniqueID"] = "userId"
table = {
"name": table_name,
"mappings": columns
}
postTableToSwagger(use_case,table)
def add_layers(use_case:str, table_name: str):
layers = [
{
"use_case": use_case,
"table": table_name,
"name": "User_Layer",
"properties": [
"UniqueID",
"rewardBalance"
],
"cluster_properties": [
"UniqueID",
]
},
{
"use_case": use_case,
"table": table_name,
"name": "User_Balance_Layer",
"properties": [
"UniqueID",
"rewardBalance"
],
"cluster_properties": [
"rewardBalance"
]
}
]
postLayersToSwagger(use_case,layers)
def main(use_case: str):
print("user")
table_name = "user"
add_table(use_case,table_name)
add_layers(use_case,table_name)
from _add_use_case_scripts.requestPost import postLayersToSwagger, postTableToSwagger
def add_table(use_case: str, table_name: str):
'''
take the columns and add the mappings at the server
replace all "/"'s in the internal representation with a "_"
'''
columns = [
# "docType",
"videoId",
"Video_Token",
"replyTo",
"Created",
"Duration",
"videoResolution",
"Label",
"ThreadId",
"Position",
"ModifiedDate",
"Views",
"ModeratedBy",
"CommunityManagerNotes",
"Rewards",
"Video_State",
"Video_Type"
]
columns = { c : c for c in columns }
columns["UniqueID"] = "videoId"
table = {
"name": table_name,
"mappings": columns
}
postTableToSwagger(use_case,table)
def add_layers(use_case:str, table_name: str):
layers = [
{
"use_case": use_case,
"table": table_name,
"name": "Manager_Layer",
"properties": [
"UniqueID",
"ModifiedDate",
"ModeratedBy",
"Video_State",
"Video_Type"
],
"cluster_properties": [
"ModeratedBy",
"Video_State"
]
},
{
"use_case": use_case,
"table": table_name,
"name": "Video_Popularity_Layer",
"properties": [
"UniqueID",
"Label",
"Created",
"Views",
"Rewards",
"Video_State",
"Video_Type"
],
"cluster_properties": [
"Views",
"Video_Type"
]
},
{
"use_case": use_case,
"table": table_name,
"name": "Video_Age_Layer",
"properties": [
"UniqueID",
"Label",
"Created",
"Views",
"Rewards",
"Video_State",
"Video_Type"
],
"cluster_properties": [
"Created"
]
},
{
"use_case": use_case,
"table": table_name,
"name": "Rewards_Layer",
"properties": [
"UniqueID",
"Label",
"Created",
"Views",
"Rewards",
"Video_State",
"Video_Type"
],
"cluster_properties": [
"Rewards"
]
},
{
"use_case": use_case,
"table": table_name,
"name": "Video_Lenght_Layer",
"properties": [
"UniqueID",
"Created",
"Views",
"Duration",
"Video_State",
"Video_Type"
],
"cluster_properties": [
"Duration"
]
},
{
"use_case": use_case,
"table": table_name,
"name": "Video_Resolution_Layer",
"properties": [
"UniqueID",
"Created",
"Views",
"videoResolution",
"Video_State",
"Video_Type"
],
"cluster_properties": [
"videoResolution"
]
}
]
postLayersToSwagger(use_case,layers)
def main(use_case: str):
print("Video")
table_name = "video"
add_table(use_case,table_name)
add_layers(use_case,table_name)
......@@ -62,6 +62,39 @@ paths:
responses:
'200':
description: "Successful Request"
/use_cases/{use_case}/transactions-duplicated:
delete:
security:
- JwtRegular: []
operationId: "routes.transactions.delete_all_duplicated_for_use_case"
tags:
- "Transactions"
summary: "Deletes all duplicated Transactions in the given Use-Case"
description: "Deletes all duplicated Transactions in the given Use-Case"
parameters:
- in: path
name: "use_case"
required: true
type: "string"
responses:
'200':
description: "Successful Request"
get:
security:
- JwtRegular: []
operationId: "routes.transactions.all_duplicated_for_use_case"
tags:
- "Transactions"
summary: "Retrieves all duplicated Transactions in the given Use-Case"
description: "Retrieves all duplicated Transactions in the given Use-Case"
parameters:
- in: path
name: "use_case"
required: true
type: "string"
responses:
'200':
description: "Successful Request"
/debug:
post:
......
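The new `transactions-duplicated` path can be exercised with a plain HTTP client, assuming the trace-retrieval host from the README. This only builds the request; `"vialog"` and `"<JWT>"` are placeholder values, and actually sending it requires a valid token (the services use a self-signed certificate, so certificate verification would need to be relaxed as in `requestPost.py`):

```python
from urllib.request import Request

def duplicated_tx_request(use_case: str, jwt: str, method: str = "GET") -> Request:
    # Endpoint added by this commit, on the trace-retrieval microservice.
    url = f"https://articonf1.itec.aau.at:30001/api/use_cases/{use_case}/transactions-duplicated"
    return Request(url, method=method, headers={"Authorization": f"Bearer {jwt}"})

req = duplicated_tx_request("vialog", "<JWT>")
print(req.get_method(), req.full_url)
```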
......@@ -18,6 +18,7 @@ class Repository(MongoRepositoryBase):
self._transaction_collection = 'transactions'
self._failed_transaction_collection = 'transactions_failed'
self._duplicated_transaction_collection = "transactions_duplicated"
def delete_all_transactions(self):
collection = self._database[self._transaction_collection]
......@@ -58,3 +59,15 @@ class Repository(MongoRepositoryBase):
def delete_all_failed_transactions(self, use_case:str):
collection = self._database[self._failed_transaction_collection]
collection.delete_many({"ApplicationType": use_case})
def add_duplicated_transaction(self, transaction: Transaction):
#transaction["timestamp"] = time.time()
super().insert_entry(self._duplicated_transaction_collection, transaction.to_serializable_dict())
def all_duplicated_transactions_for_use_case(self, use_case: str) -> List[Transaction]:
result = super().get_entries(self._duplicated_transaction_collection, projection={'_id': False}, selection={"use_case": use_case})
return [Transaction.from_serializable_dict(row) for row in list(result)]
def delete_all_duplicated_transactions(self, use_case:str):
collection = self._database[self._duplicated_transaction_collection]
collection.delete_many({"ApplicationType": use_case})
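These repository methods back the duplicate check in the message handler below: a transaction whose id, table, and use case all match an already-stored one is diverted into the duplicated collection. A minimal in-memory sketch of that flow — `InMemoryRepo` is a hypothetical stand-in, not the MongoDB-backed `Repository`:

```python
# Sketch only: mirrors the duplicate check (same id, table and use case)
# with plain dicts instead of Transaction objects and MongoDB.
class InMemoryRepo:
    def __init__(self):
        self.transactions = []
        self.duplicated = []

    def add(self, tx: dict):
        for existing in self.transactions:
            if (existing["id"] == tx["id"]
                    and existing["table"] == tx["table"]
                    and existing["use_case"] == tx["use_case"]):
                self.duplicated.append(tx)  # duplicate: store separately
                return
        self.transactions.append(tx)

repo = InMemoryRepo()
tx = {"id": "t1", "table": "video", "use_case": "vialog"}
repo.add(tx)
repo.add(dict(tx))                 # same id/table/use case -> duplicated
repo.add({**tx, "table": "user"})  # different table -> stored normally
print(len(repo.transactions), len(repo.duplicated))  # 2 1
```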
......@@ -174,6 +174,19 @@ class MessageHandler:
return
transaction = Transaction(use_case, target_table["name"], flattened)
#check for duplicates
try:
reference = self._mongo_repo.get_transaction_with_id(transaction.id())
if reference is not None:
if (reference[0].table == transaction.table) and (reference[0].use_case == transaction.use_case):
LOGGER.error("Found duplicate")
self._mongo_repo.add_duplicated_transaction(transaction)
return
except ValueError as e:
LOGGER.error(f"{e}, could not insert duplicated node.")
return
try:
self._mongo_repo.add_transaction(transaction)
except ValueError as e:
......
......@@ -2,7 +2,10 @@ from messaging.rest_fetcher import RestFetcher
class DummyRestFetcher(RestFetcher):
def fetch_schema_information(self, use_case: str):
returnList = []
if use_case == "string":
returnList = [
{
"name": "string",
"use_case": "string",
......@@ -10,5 +13,33 @@ class DummyRestFetcher(RestFetcher):
"UniqueID": "ResourceIds",
"RIds": "ResourceIds"
}
},
{
"name": "string2",
"use_case": "string",
"mappings": {
"UniqueID": "ResourceIds",
"RIds": "ResourceIds"
}
}
]
else:
returnList = [
{
"name": "string",
"use_case": "string2",
"mappings": {
"UniqueID": "ResourceIds",
"RIds": "ResourceIds"
}
},
{
"name": "string2",
"use_case": "string2",
"mappings": {
"UniqueID": "ResourceIds",
"RIds": "ResourceIds"
}
}
]
return returnList
......@@ -21,3 +21,10 @@ def all_failed_for_use_case(use_case: str):
def delete_all_failed_for_use_case(use_case: str):
_repository.delete_all_failed_transactions(use_case)
return Response(status=200)
def all_duplicated_for_use_case(use_case: str):
return _repository.all_duplicated_transactions_for_use_case(use_case)
def delete_all_duplicated_for_use_case(use_case: str):
_repository.delete_all_duplicated_transactions(use_case)
return Response(status=200)
......@@ -10,6 +10,7 @@ class DummyMongoRepo:
def __init__(self):
self.added_transactions = []
self.duplicated_transactions = []
def insert_trace(self, trace):
self.last_trace = trace
......@@ -17,6 +18,22 @@ class DummyMongoRepo:
def add_transaction(self, transaction):
self.added_transactions.append(transaction)
def get_transaction_with_id(self, unique_id: str):
result = []
for trans in self.added_transactions:
transID = trans.id()
if transID == unique_id:
result.append(trans)
if len(result) > 0:
return result
return None
def add_duplicated_transaction(self, transaction):
self.duplicated_transactions.append(transaction)
from messaging.DummyMessageManager import DummyMessageManager as DummyMessageSender
from messaging.dummy_rest_fetcher import DummyRestFetcher
......@@ -53,6 +70,48 @@ class Test_MessageHandler(unittest.TestCase):
}
}
return json.dumps(message_values)
def _get_valid_message2(self) -> str:
message_values = \
{ 'type': 'blockchain-transaction',
'content':
{
"ApplicationType": "string2",
"docType": "string",
"Metadata": {},
"ResourceIds": "string",
"ResourceMd5": "string",
"ResourceState": "string",
"Timestamp": "2019-08-27T14:00:48.587Z",
"TransactionFrom": "string",
"TransactionFromLatLng": "string",
"TransactionId": "string",
"TransactionTo": "string",
"TransactionToLatLng": "string",
"TransferredAsset": "string"
}
}
return json.dumps(message_values)
def _get_valid_message3(self) -> str:
message_values = \
{ 'type': 'blockchain-transaction',
'content':
{
"ApplicationType": "string",
"docType": "string2",
"Metadata": {},
"ResourceIds": "string",
"ResourceMd5": "string",
"ResourceState": "string",
"Timestamp": "2019-08-27T14:00:48.587Z",
"TransactionFrom": "string",
"TransactionFromLatLng": "string",
"TransactionId": "string",
"TransactionTo": "string",
"TransactionToLatLng": "string",
"TransferredAsset": "string"
}
}
return json.dumps(message_values)
def test_handleGeneric_emptyMessage_NotJsonError(self):
res = self.handler.handle_generic('')
......@@ -111,5 +170,32 @@ class Test_MessageHandler(unittest.TestCase):
self.assertEqual('semantic-linking', self.msg_sender.last_message['key'])
self.assertEqual('new-trace', json.loads(self.msg_sender.last_message['msg'])["type"])
def test_handleblockchain_duplicateTrace(self):
msg = self._get_valid_message()
msg2 = self._get_valid_message()
msg = json.loads(msg)
msg2 = json.loads(msg2)
self.handler.handle_blockchain_transaction(msg['content'])
self.handler.handle_blockchain_transaction(msg2['content'])
self.assertEqual(len(self.repo.added_transactions),len(self.repo.duplicated_transactions))
def test_handleblockchain_duplicateTraceDifferentTable(self):
msg = self._get_valid_message()
msg2 = self._get_valid_message2()
msg = json.loads(msg)
msg2 = json.loads(msg2)
self.handler.handle_blockchain_transaction(msg['content'])
self.handler.handle_blockchain_transaction(msg2['content'])
self.assertEqual(len(self.repo.added_transactions),2)
def test_handleblockchain_duplicateTraceDifferentUseCase(self):
msg = self._get_valid_message()
msg2 = self._get_valid_message3()
msg = json.loads(msg)
msg2 = json.loads(msg2)
self.handler.handle_blockchain_transaction(msg['content'])
self.handler.handle_blockchain_transaction(msg2['content'])
self.assertEqual(len(self.repo.added_transactions),2)
if __name__ == '__main__':
unittest.main()