In Hadoop 2, is it possible to use the REST API to get the same result as:
yarn logs -applicationId <application ID>
This is a pain and I don't have a happy answer, but I can point you to some resources.
The YARN CLI dumps logs by going to the file system. If your application can access HDFS, it can do the same thing (but it's not simple).
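If it helps, here is a minimal Python sketch of that file-system route, listing an application's aggregated log files over WebHDFS. The namenode address, user, and remote log dir are assumptions; check yarn.nodemanager.remote-app-log-dir for your cluster (the default is /tmp/logs), and note the files themselves are in YARN's aggregated TFile format, so actually reading them takes more work than an HTTP GET.

    import requests  # assumes the requests library is installed

    NAMENODE = "http://namenode-host:50070"    # assumption: WebHDFS address
    USER = "hadoop"                            # assumption: user who ran the app
    APP_ID = "application_1547506848892_0002"  # assumption: your application id

    # Default aggregated-log layout: {remote-log-dir}/{user}/logs/{app-id}
    path = "/tmp/logs/%s/logs/%s" % (USER, APP_ID)
    resp = requests.get("%s/webhdfs/v1%s" % (NAMENODE, path),
                        params={"op": "LISTSTATUS"})
    resp.raise_for_status()

    # One aggregated log file per node that ran containers for the app.
    for status in resp.json()["FileStatuses"]["FileStatus"]:
        print(status["pathSuffix"], status["length"])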
Alternatively, you can get the application master log URL (but not the log contents) using the REST call http://<resource manager host:port>/ws/v1/cluster/apps/{appid}. From this URL, you can fetch an HTML page with the log contents, which will be returned in a <pre> tag with encoded HTML entities (&amp; etc.).
If your application is still running, you can get the raw logs using the container id from the above response: http://<node manager host:port>/ws/v1/node/containerlogs/{containerId}/stdout. This was implemented as part of YARN-649.
YARN-1264 looks like exactly what you want, but it was closed as a duplicate of the above JIRA. That wasn't quite accurate, since the CLI and the web page can get the logs after the container has finished, but the REST service above can't.
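To make the HTML-scraping route above concrete, here is a rough Python sketch: it asks the ResourceManager REST API for the app, follows the amContainerLogs URL, and unescapes whatever sits in the <pre> tag. The host names are placeholders, and the scraping is inherently fragile since that page is meant for browsers, not programs.

    import html
    import re
    import requests

    RM = "http://resourcemanager-host:8088"    # assumption: RM web address
    APP_ID = "application_1547506848892_0002"  # assumption: your application id

    app = requests.get("%s/ws/v1/cluster/apps/%s" % (RM, APP_ID)).json()["app"]
    log_url = app["amContainerLogs"]           # AM container log page (HTML)

    page = requests.get(log_url + "/stdout").text
    match = re.search(r"<pre>(.*?)</pre>", page, re.DOTALL)
    if match:
        print(html.unescape(match.group(1)))   # decode &amp; etc. back to text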
I have a Message Hub instance on Bluemix, and am able to produce / consume messages off it. I was looking for a quick, reasonable way to browse topics / messages to see what's going on. Something along the lines of kafka-topics-ui.
I installed kafka-topics-ui locally, but could not get it to connect to Message Hub. I used the kafka-rest-url value from the Message Hub credentials in the kafka-topics-ui configuration file (env.js), but could not figure out where to provide the API key.
Alternatively, in the Bluemix UI, under Kibana, I can see log entries for creating the topic. Unfortunately, I could not see log entries for messages in the topic (perhaps I'm looking in the wrong place or have the wrong filters?).
My guess is I'm missing something basic. Is there a way to either:
configure a tool such as kafka-topics-ui to connect to Message Hub, or
browse topic messages easily?
Cheers.
According to Using the Kafka REST API on Bluemix, you need an additional header in all API requests:
-H "X-Auth-Token: APIKEY"
A quick solution is to edit the kafka-topics-ui code and include your token in every request. Another solution would be to use a Chrome plugin that can inject the above header. For a more formal solution, I have opened a ticket on GitHub.
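Until the ticket is resolved, a small Python sketch like this can stand in for the UI: it sends the X-Auth-Token header on every request to the Kafka REST endpoint. The URL and key are placeholders taken from the Message Hub credentials; the /topics path follows the Kafka REST proxy API that Message Hub exposes.

    import requests

    KAFKA_REST_URL = "https://kafka-rest-host:443"  # assumption: kafka_rest_url from credentials
    APIKEY = "your-api-key"                         # assumption: api_key from credentials

    session = requests.Session()
    session.headers.update({"X-Auth-Token": APIKEY})

    # List the topics to verify the header is accepted.
    resp = session.get(KAFKA_REST_URL + "/topics")
    resp.raise_for_status()
    print(resp.json())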
When Spark is deployed in YARN cluster mode, how should I issue the Spark monitoring REST API calls (http://spark.apache.org/docs/latest/monitoring.html)?
Does YARN have an API that takes the REST call, for example (I already know the app-id):
http://localhost:4040/api/v1/applications/[app-id]/jobs
proxies it to the correct driver port, and returns the JSON back to me? By "me" I mean my client.
Assume (whether by policy or by design) I cannot talk directly to the driver machine for security reasons.
Please have a look at the Spark docs:
- REST API
Yes, with the latest API it's available.
According to this article:
It turns out there is a third surprisingly easy option which is not documented. Spark has a hidden REST API which handles application submission, status checking and cancellation.
In addition to viewing the metrics in the UI, they are also available as JSON. This gives developers an easy way to create new visualizations and monitoring tools for Spark. The JSON is available for both running applications and the history server. The endpoints are mounted at /api/v1. E.g., for the history server, they would typically be accessible at http://<server-url>:18080/api/v1, and for a running application, at http://localhost:4040/api/v1.
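As a quick illustration of those endpoints, here is a Python sketch against a history server; the host name is an assumption, and the same paths work on a running application's UI port.

    import requests

    HISTORY_SERVER = "http://history-server-host:18080"  # assumption

    # All applications known to the history server.
    apps = requests.get(HISTORY_SERVER + "/api/v1/applications").json()
    for app in apps:
        print(app["id"], app["name"])

    # Jobs for the first application, mirroring the URL in the question.
    if apps:
        app_id = apps[0]["id"]
        jobs = requests.get(HISTORY_SERVER
                            + "/api/v1/applications/%s/jobs" % app_id).json()
        print(jobs)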
These are the other options available:
Livy jobserver
Submit Spark jobs remotely to an Apache Spark cluster on Linux using Livy
Other options include
Triggering spark jobs with REST
This is what worked for me:
In the YARN Resource Manager UI, click on the "ApplicationMaster" link for the running application and note the URL it redirects to.
For me the link was something like
http://RM:20888/proxy/application_1547506848892_0002/
Append "api/v1/applications/application_1547506848892_0002" to the URL for the api.
For above case the api url is
curl "http://RM:20888/proxy/application_1547506848892_0002/api/v1/applications/application_1547506848892_0002"
I'm trying to retrieve project metrics using the REST API. To do so, I first query the projects using "/api/projects/index" and then retrieve the metrics using "/api/metrics/search". Both work fine, and I get:
[id:35476, k:com.test:TestProject, nm:TestProject, qu:TRK, sc:PRJ]
[custom:false, description:Cyclomatic complexity, direction:-1, domain:Complexity, hidden:false, id:10019, key:complexity, name:Complexity, qualitative:false, type:INT]
Now I want to retrieve a project's metrics, so I use the following URL:
https://MYHOST/sonarqube/api/timemachine/index?resource=35476&metric=10019&fromDateTime=2010-12-25T23:59:59+0100&toDateTime=2018-12-25T23:59:59+0100
The server returns only: [{"cols":[],"cells":[]}]
This surprises me, because when I open the project in the Sonar web interface, I can see numbers. I tried some other metrics, but they all ended with the same result. What am I doing wrong?
You didn't mention server version, so I'll assume the latest: 5.2.
I got the same result for a bare query (http://nemo.sonarqube.org/api/timemachine/index), and for a query which specified resource but not metrics (http://nemo.sonarqube.org/api/timemachine/index?resource=org.sonarsource.sonarqube%3Asonarqube).
So I'm guessing there's a problem with either your resource or metric id. Try using the keys (com.test%3ATestProject, and complexity) instead.
And yes, the ids you got back from the other web services should work here, but what's meant by "id" can be a little... ah... variable from service to service.
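For example, a Python sketch of the key-based call (the host, project key, and dates are placeholders; note the parameter is spelled "metrics" in the versions I've used):

    import requests

    HOST = "https://MYHOST/sonarqube"  # placeholder host

    params = {
        "resource": "com.test:TestProject",  # project key, not the numeric id
        "metrics": "complexity",             # metric key, not the numeric id
        "fromDateTime": "2010-12-25T23:59:59+0100",
        "toDateTime": "2018-12-25T23:59:59+0100",
    }
    resp = requests.get(HOST + "/api/timemachine/index", params=params)
    print(resp.json())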
I found plenty of information and examples about triggering TeamCity 8.1.2 backups via the REST API.
But leaving the backup files on the same server is pretty useless for disaster recovery.
So I'm looking for a way to copy over the generated backup file to another location.
My question is about finding the name of the latest available backup file via the REST API.
The web GUI includes this information under "Last Backup Report" on the "Backup" page of the Server Administration.
I've dug through https://confluence.jetbrains.com/display/TCD8/REST+API#RESTAPI-DataBackup and the /httpAuth/app/rest/application.wadl on my server. I didn't find any mention of a way to get this info through the REST API.
I also managed to trigger a backup, hoping that the response would give this information, but it's not there: the response body is empty and the headers don't include it.
Right now I intend to fetch the HTML page and extract this information from there, but this feels very hackish and fragile (the structure of the web page could change any time).
Is there a recommended way to get this information automatically?
Thanks.
JetBrains support got back to me with the right answer - I should use a POST method, not GET, even if the request body is empty.
Here is an example of a working request:
curl -u user:password --request POST 'http://localhost:8111/httpAuth/app/rest/server/backup?includeConfigs=true&includeDatabase=true&fileName=testBackup'
And the response contains a plain-text file name: testBackup_20150108_141924.zip
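For scripting the disaster-recovery copy, the same call is easy from Python; this sketch just captures the returned file name (the host and credentials are placeholders):

    import requests

    TEAMCITY = "http://localhost:8111"  # placeholder host

    resp = requests.post(
        TEAMCITY + "/httpAuth/app/rest/server/backup",
        params={"includeConfigs": "true",
                "includeDatabase": "true",
                "fileName": "testBackup"},
        auth=("user", "password"),
    )
    resp.raise_for_status()
    backup_file = resp.text  # e.g. testBackup_20150108_141924.zip
    print(backup_file)       # now copy this file off the server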
I have a link on my website to the standard publish page generated by Visual Studio. My concern is that if anybody finds out the URL to that page, they can download my software. Sure, I could password-protect the page with the link, but that still would not protect the download URL itself. Are there any ways to secure the ClickOnce deployment? I have looked around, and it seems like I am stuck in this sense.
A public URL is a security issue in ClickOnce deployment. However, there is a solution to your problem if your web server has Windows and .NET installed. Tell me if you have that? I will have to come up with another workaround in case you have a Linux web server.
Brief
Firstly, a bit of information about ClickOnce deployment. When you deploy the application, the GET requests made on the server are (assuming WebDir is the publish directory on the server):
G-1. GET /WebDir/setup.exe (Initial download)
G-2. GET /WebDir/MyApp.Application (setup.exe -url request)
G-3. GET /WebDir/MyApp.Application (.application deployment provider URL request)
G-4. GET /WebDir/Application Files/MyApp_1_0_0_0/MyApp.exe.manifest (Application manifest request)
G-5. GET /WebDir/Application Files/MyApp_1_0_0_0/MyApp.exe.deploy and other .deploy files ... (Application file requests)
Implementation
Now, the solution is to intercept these file requests on the server. On IIS, you can attach a custom HTTP handler and handle the request. On Apache, you can redirect requests to PHP code using .htaccess files. Apart from this, you will have to generate a unique identifier uid for each client instance downloaded from the server (this can be your license key) and put it in the deployment provider URL query parameters.
Directory Structure
Create an "Application" folder inside your WebDir and restrict access to /WebDir/Application/. Rest everything can be there inside /WebDir/
File Requests
So here's what you do on an Apache web server hosted on a Windows machine:
Create a custom download page or use the one created from publishing the application using Visual Studio (but you will have to edit it manually!). Let's assume that page is /WebDir/Download.php
After authenticating the user in Download.php, you have to send setup.exe to the user from your code (you can do it with readfile() in PHP). However, the catch is that the bootstrapper (setup.exe), after installing, will do a GET request [G-2]. Don't forget that you have to validate this file request too. So you change the "setup.exe -url" property to include the uid before returning the file. For example, change it to /WebDir/uid/MyApp.Application [G-2]. You can use MsiStuff.exe to change the URL property of the bootstrapper.
Using a .htaccess file, rewrite [G-2] to /WebDir/Handler.php?user=uid. From Handler.php, you can check whether it is a valid uid. If it is, you will have to include the uid in the deployment provider URL and in the "Dependent Assemblies Path" in the deployment manifest, so that when an upgrade request comes (it essentially requests the deployment manifest), you can validate the user there too. Add the uid to the query string parameters, for example /WebDir/MyApp.Application?user=uid [G-3]. Don't forget that you will have to re-sign the manifests once you modify them. Use Mage or write your own code to do that.
So finally, the GET requests on the server will be (assuming uid=1f3rd)
G-1. GET /WebDir/Download.php
     Action: return setup.exe with the -url changed
G-2. GET /WebDir/Application/setup.exe/1f3rd/MyApp.Application
     Action: redirect, validate user, change URL, re-sign and return file
G-3. GET /WebDir/Application/setup.exe/MyApp.Application?user=1f3rd
     Action: redirect, validate user and return file
G-4. GET /WebDir/Application/1f3rd/Application Files/MyApp_1_0_0_0/MyApp.exe.manifest
     Action: redirect, validate user and return file
G-5. GET /WebDir/Application/1f3rd/Application Files/MyApp_1_0_0_0/MyApp.exe.deploy and other .deploy files ...
     Action: redirect, validate user and return file
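To make the gatekeeper idea concrete, here is a rough sketch of the validation logic Handler.php would implement; it is written in Python/Flask only for brevity, and the license check, paths, and the manifest re-signing step are all placeholders you would replace with your own.

    import os
    from flask import Flask, abort, request, send_file

    app = Flask(__name__)
    WEB_DIR = "/var/www/WebDir/Application"  # restricted dir, not directly web-accessible

    def is_valid_uid(uid):
        # Placeholder: look the uid up in your license database.
        return uid == "1f3rd"

    @app.route("/files/<path:relpath>")
    def serve_deployment_file(relpath):
        uid = request.args.get("user", "")
        if not is_valid_uid(uid):
            abort(403)                       # unknown uid: refuse the download
        full = os.path.normpath(os.path.join(WEB_DIR, relpath))
        if not full.startswith(WEB_DIR):
            abort(400)                       # block path traversal
        # For .application manifests you would also rewrite the deployment
        # provider URL to embed the uid and re-sign (e.g. with Mage) before
        # returning; plain .deploy files can be streamed as-is.
        return send_file(full)

    if __name__ == "__main__":
        app.run()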
Pros
The application is successfully deployed and upgraded only if all the requests carry a valid uid in the URL.
You can now identify different instances of the application on client systems. You can track the update history, do a selective version upgrade/downgrade, and much more!
Cons
You will need a Windows server to implement the above, since you need mage.exe (or your own .NET code-signing application) and MsiStuff.exe.
You may have minor performance issues, since you are performing validation on every file request. You can choose to skip validation on .manifest and .deploy file requests.
You will have to ensure proper security for the company's certificate, which will be present on the web server for signing. (Storing it on the server's local file system is fine if you have the whole server to yourself, unless somebody breaks into the machine itself!)
If you want me to clarify something or explain in more detail, feel free to ask. If you have suggestions for modifying the above, post those too.
I will write a detailed CodeProject article if I have spare time someday.