OrientDB - HTTP API with JSON Param for content 'merge'

I am using the HTTP API (v2.2.9 CE) to try to insert / update using JSON payloads:
commandJSON["command"] = " update Worker merge :workerJSON where userName = :userName"
commandJSON["parameters"] = {"workerJSON": worker, "userName": userName}
An example 'worker' payload I am using to update only the city is:
{
"#class": "Worker",
"city": "Cornwell"
}
(note: username passed is fine)
"Error parsing query:\\u000a update Worker merge :workerJSON where
userName = :userName\\
I can run this as straight SQL in Studio, but not via the REST API; it seems there is some issue with the JSON payload.
FYI, I am using Python with the requests package to POST.
Any ideas? Any suggestions for debugging server-side?
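For reference, here is roughly how I am building and sending the request with requests (the host, database name and credentials below are placeholders, not my real values):

import requests

worker = {"#class": "Worker", "city": "Cornwell"}   # example payload from above
userName = "someUser"                               # placeholder

commandJSON = {}
commandJSON["command"] = "update Worker merge :workerJSON where userName = :userName"
commandJSON["parameters"] = {"workerJSON": worker, "userName": userName}

# POST to the OrientDB REST command endpoint (placeholder host, database and credentials)
resp = requests.post(
    "http://localhost:2480/command/mydb/sql",
    json=commandJSON,
    auth=("admin", "admin"),
)
print(resp.status_code, resp.text)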

Related

How do I configure my Postman mock server response to return a date always two days in the past?

In my Postman Mock Server, I have set up a GET request to return JSON, in which the following is returned:
"due_date": "2021-10-10"
What I would like is to adjust the response so that the date returned is two days in the past. So if today is "2021-10-10", I would like the response to contain
"due_date": "2021-10-08"
And if today is "2022-01-01", I would like the response to contain
"due_date": "2021-12-30"
And so on. How do I set up my Postman mock server request to return such data?
I think it's a good question; besides, I was curious, so I did some research and found a workaround. It's a bit complex, and I'm not sure whether it's worth it or not.
First of all, the Postman Mock Server (Mock Server for short) cannot execute any tests or pre-request scripts, so it cannot compute anything. You need a calculation here, so what can you do? Well, you can define an environment for the Mock Server, which gives you the ability to use dynamic values in mock responses.
I will continue step by step to show the process.
1 - Open a Mock Server with an environment:
1.1 - Create a collection for the new Mock Server:
Your mock response will look like below:
{"due_date": "{{date}}"}
1.2 - Create an environment:
1.3 - Finish creating it:
1.4 - When you finish, Postman creates a collection like below:
1.5 - You can test your Mock Server from this collection:
As you can see, the Mock Server uses the environment variable in its response.
Now, we have to figure out how to update the environment variable.
You have to use an external service to update your environment variable. You can use Postman Monitor for this job because it can execute tests (meaning any code) and works like a cron job, which means you can set a Postman Monitor to update a specific environment variable every 24 hours.
2 - Open a Postman Monitor to update your environment:
2.1 - This step is pretty straightforward; create a Postman Monitor with a configuration like the one below:
2.2 - Write a test to update the environment:
The test will look like below:
// you have to use pm.test(), otherwise Postman Monitor does not execute the test
const moment = require("moment");
pm.test("update date", () => {
  // set the date 2 days in the past
  let startdate = moment();
  const dayCount = 2;
  startdate = startdate.subtract(dayCount, "days");
  startdate = startdate.format("YYYY-MM-DD");
  // this does not work on Postman Monitor, so use the Postman API as shown below
  //pm.environment.set('date', startdate);
  const data = JSON.stringify({
    environment: {
      values: [
        {
          key: "date",
          value: startdate,
        },
      ],
    },
  });
  const environmentID = "<your-environment-id>";
  // set the environment variable with the Postman API
  const postRequest = {
    url: `https://api.getpostman.com/environments/${environmentID}`,
    method: "PUT",
    header: {
      "Content-Type": "application/json",
      "X-API-Key": "<your-postman-api-key>",
    },
    body: {
      mode: "raw",
      raw: data,
    },
  };
  pm.sendRequest(postRequest, (error, response) => {
    console.log(error ? error : response.json());
    // force the test to fail if any error occurs
    if (error) pm.expect(true).to.equal(false);
  });
});
You cannot change an environment variable with pm.environment when you are using Postman Monitor. You should use the Postman API with pm.sendRequest in your test.
You need to get a Postman API key and find your environment ID. You can get the environment ID from the Postman API.
To learn your Environment ID, use this endpoint: https://www.postman.com/postman/workspace/postman-public-workspace/request/12959542-b7ace502-4a5a-4f1c-8164-158811bbf236
To learn how to get a Postman API key: https://learning.postman.com/docs/developer/intro-api/#generating-a-postman-api-key
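If it helps, you can also list your environments programmatically to find the ID; a minimal Python sketch (the API key value is a placeholder):

import requests

# list all environments of your account via the Postman API (placeholder API key)
resp = requests.get(
    "https://api.getpostman.com/environments",
    headers={"X-API-Key": "<your-postman-api-key>"},
)
for env in resp.json()["environments"]:
    print(env["id"], env["name"])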
2.3 - Run Postman Monitor manually to make sure tests are working:
2.4 - As you can see, Postman Monitor executes the script:
2.5 - When I check the environment, I can see the result:
You can test from the browser to see the results:
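For example, a quick check from code (the mock URL and path are placeholders for your own mock server):

import requests

# call the mock endpoint and confirm it returns the value of the "date" variable
resp = requests.get("https://<your-mock-id>.mock.pstmn.io/<your-path>")
print(resp.json())   # expected: {"due_date": "<the date stored in the environment>"}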
I answered this question earlier, but I have another solution.
You can deploy a server to update the variable from your mock environment. If you want to do it for free, just use Heroku.
I wrote a Flask app in Python and deployed it to Heroku; check the code below:
from flask import Flask
import os
import json
import requests
from datetime import datetime, timedelta

app = Flask(__name__)

# the port is randomly assigned and then mapped to port 80 by Heroku
port = int(os.environ.get("PORT", 5000))
# debug mode
debug = False

@app.route('/')
def hello_world():
    N_DAYS_AGO = 2
    # calculate the date N days in the past
    today = datetime.now()
    n_days_ago = today - timedelta(days=N_DAYS_AGO)
    n_days_ago_formatted = n_days_ago.strftime("%Y-%m-%d")
    # build the payload for the Postman API
    # (the key must match the variable name used in the mock response, "date" here)
    payload = json.dumps({
        "environment": {
            "values": [
                {
                    "key": "date",
                    "value": n_days_ago_formatted
                }
            ]
        }
    })
    postman_api_key = "<your-postman-api-key>"
    headers = {
        'Content-Type': 'application/json',
        'X-API-Key': postman_api_key
    }
    environment_id = "<your-environment-id>"
    url = "https://api.getpostman.com/environments/" + environment_id
    r = requests.put(url, data=payload, headers=headers)
    # return the Postman API response
    return r.content

if __name__ == '__main__':
    app.run(debug=debug, host='0.0.0.0', port=port)
The code calculates the new date and sends it to the mock environment. It works; I tested it on Heroku before posting this answer.
When you visit your Heroku app's page, the code runs and the date environment variable updates automatically; use that environment variable in your mock server to solve the problem.
You need to automate this code execution, so I suggest using UptimeRobot to ping your Heroku app once a day. On every ping, your environment variable will update. Don't overuse it, because Heroku has a usage quota on the free plan.
To use this code you need to know how to deploy a Flask app on Heroku. By the way, Flask is just an option here; you can use Node.js instead of Python, and the logic stays the same.

409 (request "Conflict") when creating second Endpoint connection in MongoDB Atlas using Terraform

I need to create many MongoDB Atlas endpoint connections using Terraform.
I successfully created the first one using this code:
#Private endpoint connection
resource "mongodbatlas_private_endpoint" "dbpe" {
  project_id    = var.prj_id
  provider_name = "AWS"
  region        = var.aws_region
}

#AWS endpoint for secure connect to mongo db
resource "aws_vpc_endpoint" "ec2" {
  vpc_id = var.sh_vpc
  #service_name = "com.amazonaws.${var.aws_region}.ec2"
  service_name      = mongodbatlas_private_endpoint.dbpe.endpoint_service_name
  vpc_endpoint_type = "Interface"
  security_group_ids = [
    aws_security_group.lb_sg.id,
  ]
  subnet_ids = [
    aws_subnet.subnet1.id,
    var.sh_subnet
  ]
  tags = {
    "Name" = local.tname
  }
  #private_dns_enabled = true
}
But when I try to use this code a second time in another folder (another tfstate), it fails with this error:
Error: error creating MongoDB Private Endpoints Connection: POST https://cloud.mongodb.com/api/atlas/v1.0/groups/***/privateEndpoint: 409 (request "Conflict") A PrivateLink Endpoint Service already exists for AWS region US_EAST_2.
As I understand it, the second "mongodbatlas_private_endpoint" "dbpe" tries to create another endpoint service. But when I create the second endpoint manually through the web UI, it uses the same service as the first endpoint.
How can I tell the second endpoint to use the existing service?
Or maybe I'm doing it all wrong?
Please help!
Thank you!
I found the solution.
Creating the "Endpoint Connection" really creates Endpoint only when you do it at first time. All of next times is creating an only association between Atlas endpoint and new AWS Endpoint.
In terraform I tried to create an Atlas endpoint second time and catch an error (because of limit - 1 endpoint per region). All I need to do - is create "Basic Endpoint" one time (by separate folder with own tfstate) and don't delete it. And for each new AWS endpoint need to create a new link from AWS Endpoint to "Basic". I do it by a terraform resource:
mongodbatlas_private_endpoint_interface_link
Resource "mongodbatlas_private_endpoint" is not need now. A "service_name" parameter in "aws_vpc_endpoint" you can hardcoded from "Basic" Endpoint. Use "output" to see mongodbatlas_private_endpoint.test.private_link_id - this is what you need.

Public URLs For Objects In Bluemix Object Storage Service

I would like to upload a number of photos to the Bluemix Object Storage service and then display them in a web app. Right now a GET request to a photo in the object storage container requires an auth token. Is there any way I can create a public URL to the object that would not require an auth token for a GET request?
I see there is an option to create temporary URLs to objects, but I don't want the URL to be temporary; I want it to live forever. Is the only option to create a long-lived temporary URL?
The correct way to do this is to modify the container ACL. You cannot do this via the Bluemix UI currently, but you can using the Swift REST API. For example, to change the container ACL so that anyone can read objects in the container, you can issue the following PUT request:
curl -X PUT "https://dal.objectstorage.open.softlayer.com/v1/AUTH_123/mycontainer" \
-H "X-Auth-Token: token123" \
-H "X-Container-Read: .r:*"
I know this is an old post, but with the help of Ryan Baxter and the IBM Object Storage documentation I was able to resolve the issue.
Finally, these two commands saved the day.
First, use swift and change the access control of the container:
swift post container-name --read-acl ".r:*,.rlistings"
Next, using curl, configure the container to a particular URL for accessing files:
curl -X GET "https://<access point>/<version>/AUTH_projectID/container-name" -H "X-Auth-Token:<auth token>" -H "X-Container-Read: .r:*,.rlistings"
I'm also very grateful for the help provided by Alex da Silva.
BlueMix now has an S3 endpoint capability. You can use curl or any other language; for example, here is a boto3 snippet that will upload an object, make it public, and add some metadata:
(The function uses a JSON file in which you store the credentials, and it takes 3 variables that are used in the global app: currentdirpath, ImagesToS3, ImageName.)
import json
import time
import datetime
import boto3
from botocore.utils import fix_s3_host

def UploadImageDansBucket(currentdirpath, ImagesToS3, ImageName):
    currentdirpath = 'path/to/your/dir/current'
    ImagesToS3 = ' /path/of/your/object/'
    ImageName = 'Objectname'
    with open("credentials.json", 'r') as f:
        data = json.loads(f.read())
    bucket_target = data["aws"]["targetBucket"]
    print('Open Connection to the bucket in the cloud..........')
    s3ressource = boto3.resource(
        service_name='s3',
        endpoint_url=data["aws"]["hostEndPoint"],
        aws_access_key_id=data["aws"]["idKey"],
        aws_secret_access_key=data["aws"]["secretKey"],
        use_ssl=True,
    )
    s3ressource.meta.client.meta.events.unregister('before-sign.s3', fix_s3_host)
    s3ressource.Object(bucket_target, 'hello.txt').put(Body=b"I'm a test file")
    s3ressource.Object(bucket_target, 'bin.txt').put(Body=b"0123456789abcdef" * 10000)
    fn = "%s%s" % (ImagesToS3, ImageName)
    data = open(fn, 'rb')
    #s3ressource.Bucket(bucket_target).put_object(Key=fn, Body=data)
    now = datetime.datetime.now()              # get the current date
    timestamp = time.mktime(now.timetuple())   # convert it to a timestamp
    timestampstr = str(timestamp)
    # upload the file, make it publicly readable, and attach some metadata
    s3ressource.Bucket(bucket_target).upload_file(
        fn,
        ImageName,
        ExtraArgs={
            "ACL": "public-read",
            "Metadata": {
                "METADATA1": "a",
                "METADATA2": "b",
                "METADATA3": "c",
                "timestamp": timestampstr,
            },
        },
    )

How to get all jobs status through spark REST API?

I am using Spark 1.5.1 and I'd like to retrieve the status of all jobs through the REST API.
I am getting the correct result using /api/v1/applications/{appId}, but while accessing jobs via /api/v1/applications/{appId}/jobs I get a "no such app:{appID}" response.
How should I pass the app ID here to retrieve the job statuses of an application using the Spark REST API?
Spark provides 4 hidden RESTful APIs:
1) Submit a job - curl -X POST http://SPARK_MASTER_IP:6066/v1/submissions/create
2) Kill a job - curl -X POST http://SPARK_MASTER_IP:6066/v1/submissions/kill/driver-id
3) Check the status of a job - curl http://SPARK_MASTER_IP:6066/v1/submissions/status/driver-id
4) Status of the Spark cluster - http://SPARK_MASTER_IP:8080/json/
If you want to use other APIs, you can try Livy or Lucidworks.
URL - https://doc.lucidworks.com/fusion/3.0/Spark_ML/Spark-Getting-Started.html
This is supposed to work when accessing a live driver's API endpoints, but since you're using Spark 1.5.x I think you're running into SPARK-10531, a bug where the Spark Driver UI incorrectly mixes up application names and application ids. As a result, you have to use the application name in the REST API url, e.g.
http://localhost:4040/api/v1/applications/Spark%20shell/jobs
According to the JIRA ticket, this only affects the Spark Driver UI; application IDs should work as expected with the Spark History Server's API endpoints.
This is fixed in Spark 1.6.0, which should be released soon. If you want a workaround which should work on all Spark versions, though, then the following approach should work:
The api/v1/applications endpoint misreports application names as application ids, so you should be able to hit that endpoint, extract the id field (which is actually an application name), then use that to construct the URL for the current application's job list (note that the /applications endpoint will only ever return a single application in the Spark Driver UI, which is why this approach should be safe; due to this property, we don't have to worry about the non-uniqueness of application names). For example, in Spark 1.5.2 the /applications endpoint can return a response which contains a record like
{
  id: "Spark shell",
  name: "Spark shell",
  attempts: [
    {
      startTime: "2015-09-10T06:38:21.528GMT",
      endTime: "1969-12-31T23:59:59.999GMT",
      sparkUser: "",
      completed: false
    }
  ]
}
If you use the contents of this id field to construct the applications/<id>/jobs URL then your code should be future-proofed against upgrades to Spark 1.6.0, since the id field will begin reporting the proper IDs in Spark 1.6.0+.
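A minimal sketch of this workaround in Python, assuming the driver UI is reachable on localhost:4040:

import requests
from urllib.parse import quote

base = "http://localhost:4040/api/v1"
# the live driver's /applications endpoint only ever returns a single record
app_id = requests.get(base + "/applications").json()[0]["id"]
# on Spark 1.5.x this "id" is really the application name (e.g. "Spark shell");
# on 1.6.0+ it is the real application ID, so the same code works either way
jobs_url = "%s/applications/%s/jobs" % (base, quote(app_id, safe=""))
for job in requests.get(jobs_url).json():
    print(job["jobId"], job["status"], job["name"])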
For those who have this problem and are running on YARN:
According to the docs,
when running in YARN cluster mode, [app-id] will actually be [base-app-id]/[attempt-id], where [base-app-id] is the YARN application ID
So if your call to https://HOST:PORT/api/v1/applications/application_12345678_0123 returns something like
{
  "id" : "application_12345678_0123",
  "name" : "some_name",
  "attempts" : [ {
    "attemptId" : "1",
    <...snip...>
  } ]
}
you can get, e.g., jobs by calling
https://HOST:PORT/api/v1/applications/application_12345678_0123/1/jobs
(note the "1" before "/jobs").
If you want to use the REST API to control Spark, you're probably best off adding the Spark Jobserver to your installation, which then gives you a much more comprehensive REST API than the private REST APIs you're currently querying.
Poking around, I've managed to get the job status for a single application by running
curl http://127.0.0.1:4040/api/v1/applications/Spark%20shell/jobs/
which returned
[ {
  "jobId" : 0,
  "name" : "parquet at <console>:19",
  "submissionTime" : "2015-12-21T10:46:02.682GMT",
  "stageIds" : [ 0 ],
  "status" : "RUNNING",
  "numTasks" : 2,
  "numActiveTasks" : 2,
  "numCompletedTasks" : 0,
  "numSkippedTasks" : 0,
  "numFailedTasks" : 0,
  "numActiveStages" : 1,
  "numCompletedStages" : 0,
  "numSkippedStages" : 0,
  "numFailedStages" : 0
} ]
Spark has some hidden RESTful APIs that you can try.
Note that I have not tried them yet, but I will.
For example, to get the status of a submitted application you can do:
curl http://spark-cluster-ip:6066/v1/submissions/status/driver-20151008145126-0000
Note: "driver-20151008145126-0000" is the submissionId.
You can take a deeper look in the post from arturmkrtchyan on GitHub.

BAM 2.5.0 - Monitoring realtime traffic - Error when creating a new execution plan "Imported streams cannot be empty"

As a new user of BAM with CEP integration, I'm currently following the "Monitoring Realtime Traffic" sample from the WSO2 documentation and am blocked at the step of creating the execution plan. Link to doc
The doc requires you to:
4. Under Import Stream select org.wso2.sample.rt.traffic for Import Stream, and enter traffic for As.
Unfortunately, when I click "Import" nothing happens (the doc shows we should get //imported from org.wso2.sample.rt.traffic:1.0.0).
When I try to add the execution plan I get the "Imported streams cannot be empty" error.
Am I making a mistake ?
Regards
Vpl
I was able to work around this UI problem by creating the event stream directly in the registry. For that, I've created the following resource:
/_system/governance/StreamDefinitions/org.wso2.sample.rt.traffic/1.0.0
containing
{
  "streamId": "org.wso2.sample.rt.traffic:1.0.0",
  "name": "org.wso2.sample.rt.traffic",
  "version": "1.0.0",
  "payloadData": [
    {
      "name": "entry",
      "type": "STRING"
    }
  ]
}
With a Media Type: application/json
Then, when creating the execution plan, I could import the event stream and continue the use case / tutorial.
Regards