I know that every command I enter in Kubernetes communicates with the API.
Now I want to speak to the API directly.
How can I find the JSON format for every command?
I suggest using a client library if you are talking to the API from a programming language:
https://kubernetes.io/docs/reference/#api-client-libraries
Or use kubectl if you are talking from the CLI. Hardcoding API schemas will add a maintenance burden; you would basically be reimplementing the client in this case.
The following is the Kubernetes API reference documentation, where you can find the equivalent API call for each resource:
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/
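If you just want to see the JSON that kubectl itself sends for a given command, here is a minimal sketch (these are standard kubectl flags, but the exact output format varies by version):
# Print the HTTP requests kubectl issues, including request/response contents (verbosity 8+)
kubectl get pods -v=8
# Open a local authenticated proxy to the API server, then call the REST endpoints directly
kubectl proxy --port=8001 &
curl http://localhost:8001/api/v1/namespaces/default/pods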
Hope this helps
I am using skuber 2.4.0.
Is there a way to use the skuber client to delete an object in a specific namespace?
I understand that when I create a client I can specify a namespace.
But, for example, when creating a job you can specify any namespace in the metadata.
It seems weird to need to create a new client just for deletion.
I suggest you open an issue in the GitHub repo for further help:
https://github.com/hagay3/skuber/issues
EDIT:
The namespaces API is now available in version 2.7.6.
It's possible to provide a namespace in every client endpoint.
https://github.com/hagay3/skuber/releases/tag/v2.7.6
OLD ANSWER:
Regarding your question, you can use the following method:
def usingNamespace(newNamespace: String): KubernetesClient
https://github.com/hagay3/skuber/blob/master/client/src/main/scala/skuber/api/client/KubernetesClient.scala#L366
Example:
https://github.com/hagay3/skuber/blob/master/client/src/it/scala/skuber/NamespaceSpec.scala#L53
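A minimal sketch of how that can be used (assuming a client k8s created with k8sInit, and a Job named "my-job"; both names are placeholders):
import skuber._
import skuber.batch.Job

// Scope the client to a different namespace just for this call, then delete by name
val done = k8s.usingNamespace("my-namespace").delete[Job]("my-job")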
Currently, I'm developing a native application using React Native. I've decided to go with AWS Amplify because of its real-time updates as well as its authentication.
I also have a web application that runs on a Node.js server with Express. This web application connects to a Mongo database.
My big problem is that I would like all of my AWS Amplify queries to run against my existing MongoDB instead of the new DynamoDB database that comes with AWS AppSync, but unfortunately I don't know where to start. This would also make it easier to add authentication to my existing web application.
My first idea was to just create all my API endpoints on a new Node.js server and have AppSync call those endpoints, but I'm not sure how to implement calling endpoints on an existing server (and this seems kind of counterintuitive to the 'serverless' idea).
My other idea came from this: Can AWS App-Sync be used without dynamoDB
That answer suggests using AWS Lambda to 'pipeline' my data to the existing MongoDB, but I'm not really sure what that entails.
TL;DR - I would like to be able to query an existing MongoDB instead of using DynamoDB when using AWS Amplify with AppSync.
I hope this is clear enough and doesn't sound like I'm rambling. Thanks in advance!
I would suggest using either an HTTP data source to connect to your MongoDB backend or a Lambda function. Here are a couple of getting-started tutorials for both:
https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-http-resolvers.html
https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-lambda-resolvers.html
If you go the Lambda route, then you can leverage the new @function feature of the GraphQL Transformer in the Amplify CLI: https://aws-amplify.github.io/docs/cli/graphql#function
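As a rough sketch of what the @function route looks like in a schema (the type, field, and Lambda name here are made-up placeholders; the Lambda itself would hold your MongoDB query logic):
type Todo {
  id: ID!
  title: String
}

type Query {
  # AppSync invokes the named Lambda, which performs the actual MongoDB query
  listTodos: [Todo] @function(name: "mongoResolver-${env}")
}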
I wanted to automate API pentesting.
I referred to this blog:
https://zaproxy.blogspot.in/2017/06/scanning-apis-with-zap.html
Could you direct me to where I can get a sample zap-options file to pass with the -z option to the zap-api-scan.py script, or to documentation on the format in which config values have to be specified in the file? I could not find this in the official ZAP docs.
See this FAQ: https://github.com/zaproxy/zaproxy/wiki/FAQconfigValues - that's the best we've got at the moment, I'm afraid.
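As an illustrative sketch (the replacer keys below are the ones the blog post linked above uses to inject an auth header; the file name and token value are placeholders):
# zap_options: one ZAP -config setting per line, here adding an Authorization header to every request
-config replacer.full_list(0).description=auth1
-config replacer.full_list(0).enabled=true
-config replacer.full_list(0).matchtype=REQ_HEADER
-config replacer.full_list(0).matchstr=Authorization
-config replacer.full_list(0).replacement=<your-token>
You could then pass the file contents through to the scan script with something like: zap-api-scan.py -t https://example.com/openapi.json -f openapi -z "$(cat zap_options)" (the target URL is a placeholder).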
When Spark is deployed in YARN cluster mode, how should I issue the Spark monitoring REST API calls described at http://spark.apache.org/docs/latest/monitoring.html ?
Does YARN have an API that takes a REST call such as (I already know the app-id)
http://localhost:4040/api/v1/applications/[app-id]/jobs
proxies it to the correct driver port, and returns the JSON back to me? By "me" I mean my client.
Assume (or take it as a design constraint) that I cannot talk directly to the driver machine for security reasons.
Please have a look at the Spark docs:
- REST API
Yes, it's available with the latest API.
Per this article:
It turns out there is a third surprisingly easy option which is not documented. Spark has a hidden REST API which handles application submission, status checking and cancellation.
In addition to viewing the metrics in the UI, they are also available as JSON. This gives developers an easy way to create new visualizations and monitoring tools for Spark. The JSON is available for both running applications, and in the history server. The endpoints are mounted at /api/v1. E.g., for the history server, they would typically be accessible at http://<server-url>:18080/api/v1, and for a running application, at http://localhost:4040/api/v1.
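For example (host names and the app id below are placeholders; the ports are the defaults from the monitoring docs):
# List applications known to a running driver (default UI port 4040)
curl http://localhost:4040/api/v1/applications
# Fetch the jobs of an application from the history server (default port 18080)
curl http://<server-url>:18080/api/v1/applications/<app-id>/jobs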
These are the other options available:
Livy jobserver
Submit Spark jobs remotely to an Apache Spark cluster on Linux using Livy
Other options include
Triggering spark jobs with REST
This is what worked for me:
In the YARN Resource Manager UI, click on the "application manager" link for the running application and note the URL it directs to.
For me the link was something like
http://RM:20888/proxy/application_1547506848892_0002/
Append "api/v1/applications/application_1547506848892_0002" to that URL to form the API endpoint.
For the above case the API URL is:
curl "http://RM:20888/proxy/application_1547506848892_0002/api/v1/applications/application_1547506848892_0002"
I'm trying to retrieve a project's metrics using the REST API. Therefore I first query the projects using "/api/projects/index". Afterwards I retrieve the metrics using "/api/metrics/search". Both work fine, and I end up with:
[id:35476, k:com.test:TestProject, nm:TestProject, qu:TRK, sc:PRJ]
[custom:false, description:Cyclomatic complexity, direction:-1, domain:Complexity, hidden:false, id:10019, key:complexity, name:Complexity, qualitative:false, type:INT]
Now I want to retrieve the project's metrics. Therefore I use the following URL:
https://MYHOST/sonarqube/api/timemachine/index?resource=35476&metric=10019&fromDateTime=2010-12-25T23:59:59+0100&toDateTime=2018-12-25T23:59:59+0100
But the server returns only: [{"cols":[],"cells":[]}]
This surprises me, because when I open the Sonar web interface for the project, I can see numbers. I tried some other metrics, but they all ended with the same result. What am I doing wrong?
You didn't mention server version, so I'll assume the latest: 5.2.
I got the same result for a bare query (http://nemo.sonarqube.org/api/timemachine/index), and for a query which specified resource but not metrics (http://nemo.sonarqube.org/api/timemachine/index?resource=org.sonarsource.sonarqube%3Asonarqube).
So I'm guessing there's a problem with either your resource or metric id. Try using the keys (com.test%3ATestProject, and complexity) instead.
And yes, the ids you got back from the other web services should work here, but what's meant by "id" can be a little... ah... variable from service to service.
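A hypothetical version of the query using keys (host and dates taken from your URL; note that the documented timemachine examples use a metrics parameter, and the + in the timezone offset needs to be URL-encoded as %2B so it isn't decoded as a space):
curl "https://MYHOST/sonarqube/api/timemachine/index?resource=com.test%3ATestProject&metrics=complexity&fromDateTime=2010-12-25T23:59:59%2B0100&toDateTime=2018-12-25T23:59:59%2B0100"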