MS Project Server update resources for timesheets - REST

Since some SOAP operations were removed in Project Server 2016,
we are trying to replace the obsolete SOAP Statusing/UpdateStatus API call with the REST API call /Draft/Assignments('assignmentid') in order to assign resources and set the 'actualWork' property. The MSDN documentation says that we can send a MERGE or a PUT request to that URL but it doesn't mention what the request payload should look like.
Can you let me know what the JSON payload for this call should be?
MERGE _api/ProjectServer/Projects('projectid')/Draft/Assignments('assignmentid')
API documentation: https://msdn.microsoft.com/en-us/library/office/jj668054.aspx

Replace the placeholders in < > with the appropriate values for your data; a complete Python sketch of the whole sequence follows step 5.
1) Check out the project
POST <pwaUrl>/_api/projectserver/projects('<projectId>')/checkout
2) Add enterprise resource to project team
POST <pwaUrl>/_api/projectserver/projects('<projectId>')/draft/projectresources/addenterpriseresourcebyid('<enterpriseResourceId>')
3) Create the assignment to an existing task
POST <pwaUrl>/_api/projectserver/projects('<projectId>')/draft/assignments/add()
{ "parameters":{
"ResourceId":"<enterpriseResourceId>",
"TaskId":"<taskId>"
}
}
4) Edit one or more assignment properties
PATCH <pwaUrl>/_api/projectserver/projects('<projectId>')/draft/assignments('<draftAssignmentId>')
{ "ActualWorkTimeSpan":"PT24H" }
5a) Publish & check-in:
POST <pwaUrl>/_api/projectserver/projects('<projectId>')/draft/publish(true)
5b) Or just Check-in (if you don’t want to publish):
POST <pwaUrl>/_api/projectserver/projects('<projectId>')/draft/checkin(false)
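For reference, here is a minimal Python sketch of that sequence using the requests library. The PWA URL, credentials, NTLM authentication (requests_ntlm) and the form-digest step are assumptions about a typical on-premises setup, so adapt them to your environment.

import requests
from requests_ntlm import HttpNtlmAuth  # assumption: on-premises PWA with NTLM auth

pwa_url = "https://server/pwa"          # hypothetical PWA URL
project_id = "<projectId>"
resource_id = "<enterpriseResourceId>"
task_id = "<taskId>"
assignment_id = "<draftAssignmentId>"

session = requests.Session()
session.auth = HttpNtlmAuth("DOMAIN\\user", "password")
session.headers.update({
    "Accept": "application/json;odata=verbose",
    "Content-Type": "application/json;odata=verbose",
})

# POST/PATCH requests need a form digest obtained from /_api/contextinfo
info = session.post(f"{pwa_url}/_api/contextinfo").json()
session.headers["X-RequestDigest"] = info["d"]["GetContextWebInformation"]["FormDigestValue"]

base = f"{pwa_url}/_api/projectserver/projects('{project_id}')"

session.post(f"{base}/checkout")                                                            # 1) check out the project
session.post(f"{base}/draft/projectresources/addenterpriseresourcebyid('{resource_id}')")   # 2) add resource to team
session.post(f"{base}/draft/assignments/add()",
             json={"parameters": {"ResourceId": resource_id, "TaskId": task_id}})           # 3) create assignment
session.patch(f"{base}/draft/assignments('{assignment_id}')",
              json={"ActualWorkTimeSpan": "PT24H"})                                         # 4) set actual work
session.post(f"{base}/draft/publish(true)")                                                 # 5) publish and check in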

Related

IBM Maximo REST service POST not setting attributes on MBO

I have tried to create a record of my customized object through REST service in IBM Maximo.
The problem is that I created the record but I can't assign values to the attributes.
Next I will show what I did and what happened:
I have an Object Structure called oxidato that represents my customized object.
I did a POST using POSTMAN to this URL:
http://hostname:port/maximo/oslc/os/oxidato?lean=1
In the body section this is the JSON I was trying to send:
{
"attribute1":"205",
"attribute2":"206"
}
The record was created but none of the attributes was filled.
In my opinion, the REST service received the POST but can't read the body.
What am I missing? I've added an image of the Postman request as an example:
EDIT 1: I updated the POST to use the newer REST API (thanks, Dex!)
EDIT 2: I added an image of the headers.
I have found that Maximo will often ignore incoming attributes that aren't in the Maximo namespace (http://www.ibm.com/maximo). You could go through the trouble of setting up your VALOR1 and VALOR2 attributes to be in that namespace, but it's easier to just tell OSLC to ignore namespaces. You do that by setting the "lean" parameter to "1".
In your case, go to the "Params" tab and add an entry with a name of "lean". Give it a value of "1" and then send your POST again. You should see "?lean=1" appear at the end of the POST URL along the top there, but your body content should remain unchanged.
EDIT:
On the other hand, it looks like (based on your URL) that you aren't actually using the newer JSON/OSLC REST API, but the older REST services. This IBM page gives you a lot of information on the newer JSON REST API, including the correct URLs for it: https://developer.ibm.com/static/site-id/155/maximodev/restguide/Maximo_Nextgen_REST_API.html.
You should change your URL to /maximo/oslc/os/oxidato to use the newer API, which naturally supports JSON and the lean parameter described above. This does require Maximo 7.6, though.
EDIT 2:
The attributes are often oddly case sensitive, requiring lowercase. Your example in your question of "attribute1" and "attribute2" are properly lowercase, but your screenshot shows uppercase attribute names. Try changing them to "valor1" and "valor2". Also, these are persistent attributes, right?
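To make that concrete, here is a minimal sketch of such a POST with Python requests; the hostname, the maxauth credentials header, and the valor1/valor2 attribute names are assumptions taken from this discussion, so adjust them to your instance.

import base64
import requests

url = "http://hostname:port/maximo/oslc/os/oxidato"        # newer JSON/OSLC API URL
creds = base64.b64encode(b"maxadmin:maxadmin").decode()     # hypothetical credentials

payload = {"valor1": "205", "valor2": "206"}                # lowercase attribute names

r = requests.post(url,
                  params={"lean": 1},                       # tell OSLC to ignore namespaces
                  json=payload,
                  headers={"maxauth": creds})               # native Maximo authentication (assumption)
print(r.status_code, r.headers.get("Location"))             # Location should point at the created record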
The response code received back (e.g. 200 - OK) and the response body will detail the record that was created.
I think you are correct in that the body of the POST request is being ignored. Provided there are no required fields on the custom MBO, your POST is probably creating an empty record with the next value in the sequence for the key field, but you should see that in the response.
The following POST should create a record with values provided for attribute1 and attribute2 and provide a response with the record's identifier so that you can look it up in Maximo and show the values that were stored for attribute1 and attribute2:
http://hostname:port/maximo/rest/os/oxidato/?_format=json&_compact=1&attribute1=205&attribute2=206
Response: 200 OK
Response Body:
{ "CreateOXIDATOResponse": {
"rsStart": 0,
"rsCount": 1,
"rsTotal": 1,
"OXIDATOSet": {
"OXIDATO": {
"rowstamp": "[0 0 0 0 0 -43 127 13]",
"ATTRIBUTE1": "205",
"ATTRIBUTE2": "206",
"OXIDATOID": 13
}
} } }
You may also want to turn on debug logging for the REST interface in System Configuration -> Platform Configuration -> Logging for additional detail on what's happening in the log file.

Microfocus ALM OCTANE REST API - Get existing manual test's step details

I am trying to get existing manual test steps from ALM using the below REST API
https://almoctane-apj.saas.microfocus.com/api/shared_spaces/shared_space_id/workspaces/workspace_id/tests/manual_test_id/script
but I get the following result.
{
"creation_time": "2020-01-16T14:36:52Z",
"test_version": "{\"id\":1035,\"type\":\"test_version\"}",
"version_stamp": 5,
"last_modified": "2020-01-17T09:38:20Z",
"script": "- Open Browser\n- Type Username\n- Type PAssword\n- Submit\n- #2012 Call <ReqTest1>\n- Login using <Username> and <Password>\n- ?isLoginSuccesfull"
}
Is there a way to get existing manual test steps with details (like id, description, etc.) through the REST API?
I know this is six months late, but you could try calling the test entity:
http://URLdirection:PORT/api/shared_spaces//workspaces//tests?fields=id,latest_version&query=%22(id%3D%27yourTestId%27)%22
and once you have the latest version, you can call the test_versions entity with the latest_version attribute you got from the previous request:
http://URLdirection:PORT/api/shared_spaces//workspaces//test_versions?fields=id,script&query=%22(id%3D%27yourVersionID%27)%22
and there you will get the steps. Also consider that before doing this you need to have established a connection (requested the cookies, etc.) to avoid a 403 error, and to have properly set the headers and parameters for the request. If you're using the Micro Focus library, I didn't find any direct call for the test_versions entity.
EDIT: you can also request http://URLdirection:PORT/api/shared_spaces//workspaces//test//script
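A rough Python sketch of that two-step lookup is below; it assumes you have already signed in (so the session carries the authentication cookies) and that real IDs replace shared_space_id, workspace_id and yourTestId. The shape of the response (a data array with a latest_version field) is my assumption based on the queries above.

import requests

base = "http://URLdirection:PORT/api/shared_spaces/shared_space_id/workspaces/workspace_id"
session = requests.Session()
# ... authenticate here first (sign-in request) so the session holds the required cookies ...

# 1) look up the test and its latest version
tests = session.get(f"{base}/tests",
                    params={"fields": "id,latest_version",
                            "query": "\"(id='yourTestId')\""}).json()
version_id = tests["data"][0]["latest_version"]["id"]

# 2) fetch the script (the manual steps) of that version
versions = session.get(f"{base}/test_versions",
                       params={"fields": "id,script",
                               "query": f"\"(id='{version_id}')\""}).json()
print(versions["data"][0]["script"])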

QLIK Sense - REST api chain call

I need to integrate data in my Qlik Sense project using cloud REST api.
I need to call a chain of API as I firstly need the Token
Basically:
1) "Token" REST passing user+psw getting token
2) "API2" REST passing token received from 1 in the BODY
I think I need to use the data load script feature. I'm able to create the two REST calls separately, but how can I pass the token dynamically in the body?
Is there specific code to be added?
Thanks
Find an answer here:
https://community.qlikview.com/thread/224957
Basically, just build the body variable and escape the quotes:
let vRequestBody = '{"call":"ListarCategorias","app_key":"XXXXXXXX","app_secret":"XXXXXXXXXX","param":[{"pagina":"$(vPagina)","registros_por_pagina":100,"apenas_importado_api":"N"}]}';
let vRequestBody = replace(vRequestBody,'"', chr(34)&chr(34));
and append WITH CONNECTION(BODY "$(vRequestBody)") to the end of the default "RestConnectorMasterTable" scripting snippet:
RestConnectorMasterTable:
SQL SELECT
"__KEY_root",
(SELECT
"codigo",
"totalizadora",
"transferencia",
"__FK_categoria_cadastro"
FROM "categoria_cadastro" FK "__FK_categoria_cadastro")
FROM JSON (wrap on) "root" PK "__KEY_root"
WITH CONNECTION(BODY "$(vRequestBody)");

RESTful URLs for collection of objects

I have an entity Temperature.
My URLs are designed as follows:
GET /api/temperatures/new
GET /api/temperatures/{id}/edit
GET /api/temperatures
POST /api/temperatures
PUT /api/temperatures/{id}
DELETE /api/monitoring/temperatures/{id}
I would like to create multiple temperatures (a collection of temperatures) at once - are there any conventions in terms of the urls to use?
So far, I came up with the following:
POST /api/monitoring/temperatures/collection
GET /api/monitoring/temperatures/cnew
I thought there must be a convention for this already so would like to check with you.
GET /api/temperatures # Getting all resources
POST /api/temperatures # Create new resource
GET /api/temperatures/<id> # Get a single resource
PUT /api/temperatures/<id> # Edit all fields
PATCH /api/temperatures/<id> # Edit some fields
DELETE /api/temperatures/<id> # Delete a resource
These are the kinds of URLs Fielding describes in his thesis on REST. You shouldn't be describing what an endpoint does in the URL; when used properly, the HTTP verbs provide plenty of information. Be aware that the REST architectural style has more to it than JSON over HTTP: generic connectors, decoupling of components, and a stateless server are key components of a RESTful application.
Note: Most people probably wouldn't implement both PUT and PATCH. PUT will be fine but I included it for completeness.
In response to your comment, if you are referring to creating multiple resources with one POST request you don't need a new URL. Your application should be able to handle {temp: 45, date: ...} and [{temp: 45, date: ...}, {temp: 50, date: ...}] at the same endpoint.
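As an illustration of that last point, here is a minimal Flask sketch (my own example, not part of the original answer) that accepts either a single temperature object or a list of them at the same endpoint; save_temperature() is a hypothetical persistence helper.

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/api/temperatures", methods=["POST"])
def create_temperatures():
    payload = request.get_json()
    # Wrap a single object in a list so both cases share one code path
    items = payload if isinstance(payload, list) else [payload]
    created = [save_temperature(item) for item in items]  # save_temperature() is hypothetical
    return jsonify(created), 201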
The HTTP method GET is not suitable for creating or editing resources (/api/temperatures/new and /api/temperatures/{id}/edit). HTTP GET is used for getting information without changing state on the server. You should use POST or PUT.
If you want to create multiple temperatures, you should use
POST /api/monitoring/temperatures
and consume JSON or XML list of objects.
Java example:
@POST
@Path("/temperatures")
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public Response postTemperatures(Temperatures temperatures) {
    // process and save, then return a suitable response
    return Response.ok(temperatures).build();
}

@XmlRootElement
public class Temperatures {
    public List<Temperature> temperatures;
    Temperatures() {}
}
You can update multiple entries with a single POST by sending in an array of temperatures instead of a single entry,
POST /api/temperatures [{...},{...}]
but your api endpoint structure could be streamlined a little.
Ideally you want a simple consistent interface for all API resources.
I would personally simplify:
GET /api/temperatures/new
GET /api/temperatures/{id}/edit
GET /api/temperatures
POST /api/temperatures
PUT /api/temperatures/{id}
DELETE /api/monitoring/temperatures/{id}
to
GET /api/temperatures // Get all temperatures
POST /api/temperatures // Send in array of new entries to update
GET /api/temperatures/{id} // Read a specific temperature
PUT /api/temperatures/{id} // Update a specific temperature
DELETE /api/temperatures/{id} // Delete a specific temperature
This gives a consistent interface to the api for all temperature related calls that maps onto a CRUD interface.
Without context it's hard to work out exactly what /api/temperatures/new is used for, but I would consider using a parameter on the call for fine-graining the response.
e.g.
/api/temperatures?age=new // Get new temps
Which will allow you to use the common structure to add different types of criteria later on.

How can I get a list of all pull requests for a repo through the github API?

I want to obtain a list of all pull requests on a repo through the github API.
I've followed the instructions at http://developer.github.com/v3/pulls/ but when I query /repos/:owner/:repo/pulls it's consistently returning fewer pull requests than displayed on the website.
For example, when I query the torvalds/linux repo I get 9 open pull requests (there are 14 on the website). If I add ?state=closed I get a different set of 11 closed pull requests (the website shows around 20).
Does anyone know where this discrepancy arises, and if there's any way to get a complete list of pull requests for a repo through the API?
You can get all pull requests (closed, open, merged) through the state parameter.
Just set state=all in the GET query, like this:
https://api.github.com/repos/:owner/:repo/pulls?state=all
For more info: check the Parameters table at https://developer.github.com/v3/pulls/#list-pull-requests
Edit: As per Tomáš Votruba's comment:
the default is per_page=30. The maximum is per_page=100. To get more than 100 results, you need to call it multiple times: "&page=1", "&page=2"...
PyGithub (https://github.com/PyGithub/PyGithub), a Python library to access the GitHub API v3, enables you to get paginated resources.
For example,
from github import Github

g = Github(login_or_token="<YOUR_TOKEN>", per_page=100)
r = g.get_repo("<REPO_NAME_OR_ID>")   # e.g. "owner/repo" or a numeric repository id
for pull in r.get_pulls('all'):
    print(pull.number, pull.title)    # you can access each pull request here
See the documentation (http://pygithub.readthedocs.io/en/latest/index.html).
With Github's new official CLI (command line interface):
gh pr list --repo OWNER/REPO
which would produce something like:
Showing 2 of 2 pull requests in OWNER/REPO
#62 Doing something that-weird-branch-name
#58 My PR title wasnt-inspired-branch
See the GitHub CLI documentation for additional details, options, and installation instructions.
There is a way to get a complete list and you're doing it. What are you using to communicate with the API? I suspect you may not be doing something correctly. For example (there are only 13 open pull requests currently) using my API wrapper (github3.py) I get all of the open pull requests. An example of how to do it without my wrapper in python is:
import requests
r = requests.get('https://api.github.com/repos/torvalds/linux/pulls')
len(r.json()) == 13
and I can also get that result (vaguely) in cURL by counting the results myself: curl https://api.github.com/repos/torvalds/linux/pulls.
If you, however, run into a repository with more than 25 (or 30) pull requests, that's an entirely different issue, but it is most certainly not what you're encountering now.
If you want to retrieve all pull requests (commits, comments, issues etc) you have to use pagination.
https://developer.github.com/v3/#pagination
The GET request "pulls" will only return open pull-requests.
If you want to get all pull requests, either set the parameter state to all, or use issues.
Extra information
If you need other data from GitHub, such as issues, then you can identify pull requests from issues, and you can then retrieve each pull request no matter whether it is closed or open. It will also give you a couple more attributes (mergeable, merged, merge-commit-sha, number of commits, etc.).
If an issue is a pull-request, then it will contain that attribute. Otherwise, it is just an issue.
From the API: https://developer.github.com/v3/pulls/#labels-assignees-and-milestones
"Every pull request is an issue, but not every issue is a pull request. For this reason, “shared” actions for both features, like manipulating assignees, labels and milestones, are provided within the Issues API."
Edit: I just found that issues behave similarly to pull requests, so one would also need to set the state parameter to all to retrieve them all.
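A small Python sketch of that issues-based approach: list the issues and keep only the ones that are actually pull requests (an issue that is a pull request carries a pull_request key); substitute :owner/:repo with real values.

import requests

issues = requests.get("https://api.github.com/repos/:owner/:repo/issues",
                      params={"state": "all", "per_page": 100}).json()
# Issues that are pull requests carry a "pull_request" key
pull_requests = [i for i in issues if "pull_request" in i]
for pr in pull_requests:
    print(pr["number"], pr["title"])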
You can also use the GraphQL API v4 to request all pull requests for a repo. It requests all the pull requests by default if you don't specify the states field:
{
  repository(name: "material-ui", owner: "mui-org") {
    pullRequests(first: 100, orderBy: {field: CREATED_AT, direction: DESC}) {
      totalCount
      nodes {
        title
        state
        author {
          login
        }
        createdAt
      }
    }
  }
}
Try it in the explorer
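If you want to run that query outside the explorer, a minimal Python sketch against the GraphQL endpoint (assuming a personal access token) could look like this:

import requests

query = """
{
  repository(name: "material-ui", owner: "mui-org") {
    pullRequests(first: 100, orderBy: {field: CREATED_AT, direction: DESC}) {
      totalCount
      nodes { title state author { login } createdAt }
    }
  }
}
"""
r = requests.post("https://api.github.com/graphql",
                  json={"query": query},
                  headers={"Authorization": "bearer <your token>"})
print(r.json()["data"]["repository"]["pullRequests"]["totalCount"])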
The search API should help: https://help.github.com/enterprise/2.2/user/articles/searching-issues/
q = repo:org/name is:pr ...
GitHub provides a "Link" header which specifies the previous, next and last URLs to fetch the values. E.g., a Link header response:
<https://api.github.com/repos/:owner/:repo/pulls?state=all&page=2>; rel="next", <https://api.github.com/repos/:owner/:repo/pulls?state=all&page=15>; rel="last"
rel="next" suggests the next set of values.
Here's a snippet of Python code that retrieves information of all pull requests from a specific GitHub repository and parses it into a nice DataFrame:
import pandas as pd

organization = 'pvlib'
repository = 'pvlib-python'
state = 'all'  # other options include 'closed' or 'open'
page = 1  # initialize page number to 1 (first page)
dfs = []  # create empty list to hold individual dataframes
# Note it is necessary to loop as each request retrieves maximum 30 entries
while True:
    url = f"https://api.github.com/repos/{organization}/{repository}/pulls?" \
          f"state={state}&page={page}"
    dfi = pd.read_json(url)
    if dfi.empty:
        break
    dfs.append(dfi)  # add dataframe to list of dataframes
    page += 1  # advance onto the next page

df = pd.concat(dfs, axis='rows', ignore_index=True)
# Create a new column with usernames
df['username'] = pd.json_normalize(df['user'])['login']