I'm just starting out with LokiJS and I have one fundamental question I can't wrap my head around:
Is there a way to hardcode a LokiJS database? Or do I have to add all the data via JavaScript?
It seems necessary to me to have something like phpMyAdmin to inspect/add/delete the actual data in the database, but so far I have found nothing that does this for a LokiJS database. Isn't this a big loss in usability?
Lokijs.org:
A fast, in-memory document-oriented datastore for node.js, browser and cordova.
Use cases:
To achieve database-like features on a limited-resources device (Raspberry Pi and the like).
To create secure applications without running a server.
To persist data and reuse it in feature-rich client-side applications.
...
The list can go on up to 100 points.
Big loss in usability?
No, because it fulfills the purpose for which it was introduced: an in-memory datastore.
Is there a way to hard-code a LokiJS database?
Yes.
Create a yourExampleDB.json or yourExampleDB.db file and structure the data in the following manner.
{
"filename": "yourExampleDB.db",
"collections": [{
"name": "entries",
"data": [{
"id": 17948697,
"properties": { ... },
"meta": {
"revision": 0,
"created": 1524651771378,
"version": 0
},
"$loki": 1
}, {
"id": 17948705,
"properties": { ... },
"meta": {
"revision": 0,
"created": 1524651771378,
"version": 0
},
"$loki": 2
},
...
...
...
{
"id": 11699810,
"properties": { ... },
"meta": {
"revision": 0,
"created": 1524651771402,
"version": 0
},
"$loki": 11299
}],
"idIndex": [1, 2, ... 11298, 11299],
"binaryIndices": {},
"constraints": null,
"uniqueNames": [],
"transforms": {},
"objType": "entries",
"dirty": false,
"cachedIndex": null,
"cachedBinaryIndex": null,
"cachedData": null,
"adaptiveBinaryIndices": true,
"transactional": false,
"cloneObjects": false,
"cloneMethod": "parse-stringify",
"asyncListeners": false,
"disableMeta": false,
"disableChangesApi": true,
"disableDeltaChangesApi": true,
"autoupdate": false,
"serializableIndices": true,
"ttl": null,
"maxId": 11299,
"DynamicViews": [],
"events": {
"insert": [null],
"update": [null],
"pre-insert": [],
"pre-update": [],
"close": [],
"flushbuffer": [],
"error": [],
"delete": [null],
"warning": [null]
},
"changes": []
}, {
"name": "messages",
"data": [{
"txt": "I will only insert into this collection during databaseInitialize.",
"meta": {
"revision": 0,
"created": 1524651771378,
"version": 0
},
"$loki": 1
}],
"idIndex": [1],
"binaryIndices": {},
"constraints": null,
"uniqueNames": [],
"transforms": {},
"objType": "messages",
"dirty": false,
"cachedIndex": null,
"cachedBinaryIndex": null,
"cachedData": null,
"adaptiveBinaryIndices": true,
"transactional": false,
"cloneObjects": false,
"cloneMethod": "parse-stringify",
"asyncListeners": false,
"disableMeta": false,
"disableChangesApi": true,
"disableDeltaChangesApi": true,
"autoupdate": false,
"serializableIndices": true,
"ttl": null,
"maxId": 1,
"DynamicViews": [],
"events": {
"insert": [null],
"update": [null],
"pre-insert": [],
"pre-update": [],
"close": [],
"flushbuffer": [],
"error": [],
"delete": [null],
"warning": [null]
},
"changes": []
}],
"databaseVersion": 1.5,
"engineVersion": 1.5,
"autosave": false,
"autosaveInterval": 5000,
"autosaveHandle": null,
"throttledSaves": true,
"options": {
"serializationMethod": "normal",
"destructureDelimiter": "$<\n"
},
"persistenceMethod": "fs",
"persistenceAdapter": null,
"verbose": false,
"events": {
"init": [null],
"loaded": [],
"flushChanges": [],
"close": [],
"changes": [],
"warning": []
},
"ENV": "NODEJS"
}
This is a valid example copy of a LokiJS database (the ellipses stand for elided records).
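To consume such a hardcoded file from Node.js, point a Loki instance at it and load it. A minimal sketch (the file name matches the example above):

const loki = require('lokijs');

const db = new loki('yourExampleDB.db');
db.loadDatabase({}, (err) => {
  if (err) throw err;
  const entries = db.getCollection('entries');
  console.log(entries.count()); // number of hardcoded documents
});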
Something like phpMyAdmin to inspect/add/delete?
No.
Or do I have to add all the data via JavaScript?
Yes.
Steps to follow (a sketch is shown after the list):
Require the lokijs module.
Create a db.
Create a collection.
Insert your data.
Persist it (manually or automatically).
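A minimal sketch of those steps (the file and collection names are just placeholders):

const loki = require('lokijs');                      // 1. require the lokijs module

const db = new loki('yourExampleDB.db', {            // 2. create a db
  autosave: true,                                    // 5b. ...or persist automatically
  autosaveInterval: 5000
});

const entries = db.addCollection('entries');         // 3. create a collection
entries.insert({ id: 17948697, properties: {} });    // 4. insert your data

db.saveDatabase();                                   // 5a. persist manually...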
Hope it helps.
Related
I have this task created in my workflow:
"tasks": [
{
"name": "get_users_list",
"taskReferenceName": "get_users_list",
"inputParameters": {
"http_request": {
"uri": "https://reqres.in/api/users?page=${workflow.input.pagenumber}",
"method": "GET",
"contentType": "application/json",
"connectionTimeOut": "36000",
"readTimeOut": "36000"
}
},
"type": "HTTP",
"decisionCases": {},
"defaultCase": [],
"forkTasks": [],
"startDelay": 0,
"joinOn": [],
"optional": false,
"defaultExclusiveJoinTask": [],
"asyncComplete": false,
"loopOver": []
},
{
"name": "get_user_details",
"taskReferenceName": "get_user_details",
"inputParameters": {
"http_request": {
"uri": "https://reqres.in/api/users/${get_users_list.output.response.body.data[0].id}",
"method": "GET",
"contentType": "application/json",
"connectionTimeOut": "36000",
"readTimeOut": "36000"
}
},
"type": "HTTP",
"decisionCases": {},
"defaultCase": [],
"forkTasks": [],
"startDelay": 0,
"joinOn": [],
"optional": false,
"defaultExclusiveJoinTask": [],
"asyncComplete": false,
"loopOver": []
},
{
"name": "call_kafka",
"taskReferenceName": "call_kafka",
"inputParameters": {
"kafka_request": {
"topic": "transaction-1",
"value": "${get_user_details.output.response.body.data.first_name}",
"bootStrapServers": "kafka:9092"
},
"key": "",
"keySerializer": "org.apache.kafka.common.serialization.StringSerializer"
},
"type": "KAFKA_PUBLISH",
"decisionCases": {},
"defaultCase": [],
"forkTasks": [],
"startDelay": 0,
"joinOn": [],
"optional": false,
"defaultExclusiveJoinTask": [],
"asyncComplete": false,
"loopOver": []
}
],
The first two tasks are COMPLETED, but the "call_kafka" task fails with this error:
"Failed to invoke kafka task due to:
org.apache.kafka.common.KafkaException: Failed to construct kafka
producer"
I am new to Netflix Conductor and am trying to publish a message to Kafka through a workflow. Please correct me if anything is wrong and suggest a solution for this issue. Thanks in advance.
The Kafka task has been moved to the community repo.
Please check: https://github.com/Netflix/conductor/discussions/2961
Resolved by adding the dependency below to the conductor/server/build.gradle file:
implementation 'com.netflix.conductor:conductor-kafka:3.8.0'
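For context, that line goes inside the dependencies block of conductor/server/build.gradle; a sketch (the surrounding dependencies are elided):

dependencies {
    // ... existing server dependencies ...
    implementation 'com.netflix.conductor:conductor-kafka:3.8.0'
}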
I have 2 topics that receive data from an API, and I can execute those successfully through code. Now I'm trying to execute them through the REST API using Postman, and I'm getting an InvalidRequestException. Before attempting the request I fetched the external tasks using Camunda's GET external-task API, and my topics show up there. Later I tried to use the /external-task/fetchAndLock API to send input variables.
The external-task response is:
http://localhost:8080/engine-rest/external-task
[
{
"activityId": "Activity_0jokenq",
"activityInstanceId": "Activity_0jokenq:0623e6f2-4837-11ec-8c7e-02426d005d3a",
"errorMessage": null,
"executionId": "0623e6f1-4837-11ec-8c7e-02426d005d3a",
"id": "0623e6f3-4837-11ec-8c7e-02426d005d3a",
"lockExpirationTime": null,
"processDefinitionId": "Process_0qcjqnm:1:da2ae20a-4836-11ec-8c7e-02426d005d3a",
"processDefinitionKey": "Process_0qcjqnm",
"processDefinitionVersionTag": null,
"processInstanceId": "0623bfdb-4837-11ec-8c7e-02426d005d3a",
"retries": null,
"suspended": false,
"workerId": null,
"topicName": "yvalue",
"tenantId": null,
"priority": 0,
"businessKey": null
},
{
"activityId": "Activity_1xxpyet",
"activityInstanceId": "Activity_1xxpyet:0623e6f6-4837-11ec-8c7e-02426d005d3a",
"errorMessage": null,
"executionId": "0623e6f5-4837-11ec-8c7e-02426d005d3a",
"id": "0623e6f7-4837-11ec-8c7e-02426d005d3a",
"lockExpirationTime": null,
"processDefinitionId": "Process_0qcjqnm:1:da2ae20a-4836-11ec-8c7e-02426d005d3a",
"processDefinitionKey": "Process_0qcjqnm",
"processDefinitionVersionTag": null,
"processInstanceId": "0623bfdb-4837-11ec-8c7e-02426d005d3a",
"retries": null,
"suspended": false,
"workerId": null,
"topicName": "testingtopic",
"tenantId": null,
"priority": 0,
"businessKey": null
}
]
My request is:
POST http://localhost:8080/engine-rest/external-task/fetchAndLock
{
"workerId": 1,
"maxTasks": 100,
"topics": [
{
"topicName": "testingtopic",
"lockDuration": 100000,
"variables": {
"a": {
"value": 1,
"type": "long"
},
"b": {
"value": 2,
"type": "long"
},
"id": {
"value": 1,
"type": "long"
}
}
}
],
"asyncResponseTimeout": 5
}
My BPMN diagram is: (diagram image not included)
Sorry, the mistake was mine; I wrote the request body wrongly. I had used an object:
"variables": {}
But it is an array:
"variables": []
So I passed just the variable names: "variables": ["a","b","id"]
Later I used a POST /external-task/{id}/complete request to pass the values and complete the process.
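For anyone hitting the same InvalidRequestException, a sketch of the corrected request bodies (the values simply mirror the ones from the question; note that workerId is a string in the API, and the workerId used to complete must match the one that locked the task):

POST http://localhost:8080/engine-rest/external-task/fetchAndLock
{
  "workerId": "1",
  "maxTasks": 100,
  "topics": [
    {
      "topicName": "testingtopic",
      "lockDuration": 100000,
      "variables": ["a", "b", "id"]
    }
  ],
  "asyncResponseTimeout": 5
}

POST http://localhost:8080/engine-rest/external-task/{id}/complete
{
  "workerId": "1",
  "variables": {
    "a": { "value": 1, "type": "long" },
    "b": { "value": 2, "type": "long" },
    "id": { "value": 1, "type": "long" }
  }
}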
We have an Azure DevOps Pipeline release definition, and I am looking at putting some automation in place: a new 'Stage' should be created from a template when a pull request is triggered on branch x, and the stage should be removed when the branch is deleted. I will be using GitHub Actions for this.
The API docs are not super easy to follow.
My questions are:
Is this possible? Does the API support adding and removing stages on a _releaseDefinition?
If so, are there any examples of how to do this?
Docs:
https://learn.microsoft.com/en-us/rest/api/azure/devops/release/releases/update%20release?view=azure-devops-rest-5.1#releasedefinitionshallowreference
The API you should use is the update-definition API:
PUT https://vsrm.dev.azure.com/{org name}/{project name}/_apis/release/definitions?api-version=5.1
For the request body, adding or removing a stage in fact only changes the environments parameter. One stage definition corresponds to one JSON block, and the structure of that block is fixed.
Adding a stage: add a stage-definition JSON block to environments.
Removing a stage: remove the complete corresponding stage definition.
Here is one complete stage definition sample:
{
"id": -1,
"name": "Stage 3",
"rank": 3,
"variables": {},
"variableGroups": [],
"preDeployApprovals": {
"approvals": [
{
"rank": 1,
"isAutomated": true,
"isNotificationOn": false,
"id": 7
}
],
"approvalOptions": {
"requiredApproverCount": null,
"releaseCreatorCanBeApprover": false,
"autoTriggeredAndPreviousEnvironmentApprovedCanBeSkipped": false,
"enforceIdentityRevalidation": false,
"timeoutInMinutes": 0,
"executionOrder": "beforeGates"
}
},
"deployStep": {
"id": 8
},
"postDeployApprovals": {
"approvals": [
{
"rank": 1,
"isAutomated": true,
"isNotificationOn": false,
"id": 9
}
],
"approvalOptions": {
"requiredApproverCount": null,
"releaseCreatorCanBeApprover": false,
"autoTriggeredAndPreviousEnvironmentApprovedCanBeSkipped": false,
"enforceIdentityRevalidation": false,
"timeoutInMinutes": 0,
"executionOrder": "afterSuccessfulGates"
}
},
"deployPhases": [
{
"deploymentInput": {
"parallelExecution": {
"parallelExecutionType": "none"
},
"agentSpecification": {
"identifier": "vs2017-win2016"
},
"skipArtifactsDownload": false,
"artifactsDownloadInput": {
"downloadInputs": []
},
"queueId": 247,
"demands": [],
"enableAccessToken": false,
"timeoutInMinutes": 0,
"jobCancelTimeoutInMinutes": 1,
"condition": "succeeded()",
"overrideInputs": {}
},
"rank": 1,
"phaseType": "agentBasedDeployment",
"name": "Agent job",
"refName": null,
"workflowTasks": []
}
],
"environmentOptions": {
"emailNotificationType": "OnlyOnFailure",
"emailRecipients": "release.environment.owner;release.creator",
"skipArtifactsDownload": false,
"timeoutInMinutes": 0,
"enableAccessToken": false,
"publishDeploymentStatus": true,
"badgeEnabled": false,
"autoLinkWorkItems": false,
"pullRequestDeploymentEnabled": false
},
"demands": [],
"conditions": [],
"executionPolicy": {
"concurrencyCount": 1,
"queueDepthCount": 0
},
"schedules": [],
"owner": {
"displayName": "{user name}",
"id": "{user id}",
"isContainer": false,
"uniqueName": "{creator account}",
"url": "https://dev.azure.com/{org name}/"
},
"retentionPolicy": {
"daysToKeep": 30,
"releasesToKeep": 3,
"retainBuild": true
},
"processParameters": {},
"properties": {
"BoardsEnvironmentType": {
"$type": "System.String",
"$value": "unmapped"
},
"LinkBoardsWorkItems": {
"$type": "System.String",
"$value": "False"
}
},
"preDeploymentGates": {
"id": 0,
"gatesOptions": null,
"gates": []
},
"postDeploymentGates": {
"id": 0,
"gatesOptions": null,
"gates": []
},
"environmentTriggers": [],
"badgeUrl": "https://vsrm.dev.azure.com/{org}/_apis/{project name}/Release/badge/3c3xxx6414512/2/3"
},
Here are some key parameters you need to pay attention to: id, owner, rank and conditions.
id: the id you specify for the stage; its value must be less than 1. Any value less than 1 is okay here.
owner: this is required, or you will receive a message notifying you that the stage must have an owner.
rank: a natural number greater than 1. I suggest you increment it based on the other stages.
conditions: here you can configure which stage the new one will depend on; in a nutshell, it specifies where in the release the stage executes.
When updating the release, you must pack the whole release definition and send it as the request body: get the original one, add the new customized stage definition into it, then send the update to the API (a sketch follows).
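A minimal Node.js sketch of that get-modify-put round trip (the {org name}/{project name} placeholders, the definition id 1, and the AZURE_PAT environment variable are assumptions for illustration):

// Node 18+ (built-in fetch); run as an ES module so top-level await works
const pat = process.env.AZURE_PAT; // assumed: a PAT with Release (read & write) scope
const auth = 'Basic ' + Buffer.from(':' + pat).toString('base64');
const base = 'https://vsrm.dev.azure.com/{org name}/{project name}/_apis/release';

// 1. Get the original definition (id 1 is a placeholder)
const res = await fetch(`${base}/definitions/1?api-version=5.1`, {
  headers: { Authorization: auth }
});
const definition = await res.json();

// 2. Add the new stage (the sample block above, with "id": -1) to environments
const newStage = { /* stage definition sample from above */ };
definition.environments.push(newStage);

// 3. PUT the whole modified definition back
await fetch(`${base}/definitions?api-version=5.1`, {
  method: 'PUT',
  headers: { Authorization: auth, 'Content-Type': 'application/json' },
  body: JSON.stringify(definition)
});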
In fact, I suggest you add a stage through the UI as a test, then go to the History tab of the release definition page and choose Compare difference from the three-dots menu.
The difference panel shows the definitions side by side (the left panel is the original, the right is the updated one), and you can clearly see what you need to change to apply your idea.
I have a list of classifications & sub-classifications in Apache Atlas. I want to delete them & create a new list.
All the other classifications get deleted, but the one named "PII" gives the following error when we select Delete Classification:
Error: Given type PII has references
When we do a search via the REST API using the URL below:
http://ip.of.atlas:21000/api/atlas/v2/search/basic?classification=PII
the following result comes back:
{
"queryType": "BASIC",
"searchParameters": {
"classification": "PII",
"excludeDeletedEntities": false,
"includeClassificationAttributes": false,
"includeSubTypes": true,
"includeSubClassifications": true,
"limit": 100,
"offset": 0
},
"entities": [
{
"typeName": "hive_table",
"attributes": {
"owner": "nifi",
"createTime": 1557832055000,
"qualifiedName": "demo.test_table#demopilot",
"name": "test_table"
},
"guid": "ecb7bb24-bdde-448c-b718-07273e5ce572",
"status": "DELETED",
"displayText": "test_table",
"classificationNames": [
"PII"
],
"meaningNames": [],
"meanings": []
},
{
"typeName": "hive_table",
"attributes": {
"owner": "nifi",
"createTime": 1557832055000,
"qualifiedName": "demo.test_table#demopilot",
"name": "test_table"
},
"guid": "ed5a9284-c290-4431-ab76-27b820478e29",
"status": "DELETED",
"displayText": "test_table",
"classificationNames": [
"PII"
],
"meaningNames": [],
"meanings": []
},
{
"typeName": "hive_column",
"attributes": {
"owner": "nifi",
"qualifiedName": "demo.test_table.traffic_case#demopilot",
"name": "traffic_case"
},
"guid": "73f75a6c-9f4e-41f0-b0ef-6c05ca132639",
"status": "DELETED",
"displayText": "traffic_case",
"classificationNames": [
"PII"
],
"meaningNames": [],
"meanings": []
}
]
}
Questions:
1. Is there an API which helps to delete all classifications, irrespective of whether they are attached to an entity or not?
2. Can a single classification be deleted forcefully by classification name or GUID?
Run the GET request below:
http://ip.of.atlas:21000/api/atlas/v2/types/typedefs
and then delete the GUID attached to the typedefs.
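For example, to list only the classification typedefs (and their GUIDs), something like the following should work (the admin:admin credentials are placeholders):

curl -u admin:admin 'http://ip.of.atlas:21000/api/atlas/v2/types/typedefs?type=classification'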
I tested it out and you could use the API below to delete a tag:
curl -k -X DELETE --negotiate -u : \
  --header 'Content-Type: application/json' \
  --data '{"classificationDefs":[{"name":"PII","superTypes":[],"attributeDefs":[]}]}' \
  'https://atlas-host:21443/api/atlas/v2/types/typedefs?type=classification'
I have the following JSON output string:
{
"meta": {
"limit": 20,
"next": null,
"offset": 0,
"previous": null,
"total_count": 1
},
"objects": [{
"bcontext": "/api/v2.0/buildercontext/2/",
"bugs": [],
"build": {
"bldtype": "obj",
"branch": "main",
"buildstatus": [{
"build": "/api/v2.0/build/2140634/",
"failurereason": "_checkfailures (seen: FAIL - /testrun/18647678/ - area[4769] AIM-SANITY)",
"id": "1294397",
"lastupdate": "2015-03-31T14:30:18",
"overridden": false,
"overridedesc": "",
"overrideuser": null,
"recommended": false,
"resource_uri": "/api/v2.0/buildstatus/1294397/",
"slatype": {
"id": "26",
"name": "VA_Bats",
"resource_uri": "/api/v2.0/sla/26/"
}
}],
"changeset": "494625",
"coverage": false,
"deliverables": ["/api/v2.0/deliverable/4296455/", "/api/v2.0/deliverable/4296956/", "/api/v2.0/deliverable/4296959/", "/api/v2.0/deliverable/4296986/", "/api/v2.0/deliverable/4296992/", "/api/v2.0/deliverable/4296995/", "/api/v2.0/deliverable/4297034/", "/api/v2.0/deliverable/4297058/"],
"git_host": null,
"git_repo": null,
"id": "2140634",
"p4host": {
"id": "10",
"p4port": "perforce-rhino.eng.com:1800",
"p4weburl": "http://p4web.eng.com:1800",
"resource_uri": "/api/v2.0/perforceserver/10/"
},
"resource_uri": "/api/v2.0/build/2140634/",
"site": "/api/v2.0/site/25/",
"site_name": "mbu",
"slastested": ["/api/v2.0/sla/26/"],
"submit_time": "2015-03-31T05:40:21",
"submit_user": "haharonof"
},
"builder": "/api/v2.0/builder/1423/",
"clean": true,
"componentbuilds": "vcops-vsphere-solution-pak=sb-5242047,vrops=sb-5242013,vscm=sb-5242025,vsutilities=sb-5242029;parentbuilder=1410",
"deleted": false,
"endtime": "2015-03-31T06:20:58",
"helpzillas": [],
"id": "4296956",
"location": {
"httpserver": "sc-prd-cat-services001.eng.com",
"id": "1",
"name": "PA",
"nfsserver": "cat-results.eng.com",
"pxedir": "/mts/builder-pxe",
"resource_uri": "/api/v2.0/location/1/",
"resultspath": "/results"
},
"nfsserver": "build-storage60",
"p4client": "vmktestdevnanny-builder-1423",
"path": "/storage60/release/sb-5242148",
"ready": true,
"resource_uri": "/api/v2.0/deliverable/4296956/",
"result": "PASS",
"sbbuildid": 5242148,
"sbjobid": 5242148,
"sbuser": "arajamanickam",
"starttime": "2015-03-31T06:16:50",
"targetchangeset": "494625",
"targets": "vcopssuitevm",
"triagetime": null,
"vmodl": null
}]
}
I want to get sbbuildid using PowerShell. How can I get it?
By converting your JSON to an object using the ConvertFrom-Json cmdlet (assuming $jsonString contains the JSON above):
$jsonObj = $jsonString | ConvertFrom-Json
$jsonObj.objects.sbbuildid
Alternatively, put the whole string in $build_info and extract the value by position:
$sb_build_id = $build_info.Substring($build_info.IndexOf("sbbuildid") + 11, 8).Trim()