Is it possible to add user roles in WordPress via the REST API? - wordpress-rest-api

I'm adding users with JSON, but I also need to assign roles to these specific users via the REST API. Is this possible?
Thanks very much.

It is possible, by performing a POST request to "wp-json/wp/v2/users".
You can set the role in the request body, as in the example below:
{
  "username" : "rockstar",
  "first_name" : "Seace",
  "last_name" : "Kaka",
  "email" : "myuser@example.com",
  "password" : "Password123!",
  "roles" : ["your_role"]
}
For the remaining details, just review the documentation:
https://developer.wordpress.org/rest-api/reference/users/#create-a-user
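For example, here is a rough sketch using Python's requests library (the site URL, the basic-auth application-password credentials, and "your_role" are placeholders; the request must be authenticated as a user who is allowed to create users):
import requests

# Placeholders: site URL and an application-password credential pair.
WP_SITE = "https://example.com"
AUTH = ("admin", "xxxx xxxx xxxx xxxx xxxx xxxx")

payload = {
    "username": "rockstar",
    "first_name": "Seace",
    "last_name": "Kaka",
    "email": "myuser@example.com",
    "password": "Password123!",
    "roles": ["your_role"],  # must be a role that already exists on the site
}

# Creating the user is a POST to the users collection, not a GET.
response = requests.post(f"{WP_SITE}/wp-json/wp/v2/users", json=payload, auth=AUTH)
print(response.status_code, response.json())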

When creating a REST resource, must the representation of that resource be used?

As an example, imagine a dynamic pricing system that you can ask for offers for moving boxes from one place to another. Since the price does not exist before you ask for it, this is not as simple as retrieving data from a database; an actual search process needs to run.
What I often see in such scenarios is a request/response-based endpoint:
POST /api/offers
{
  "customerId" : "123",
  "origin" : {"city" : "Amsterdam"},
  "destination" : {"city" : "New York"},
  "boxes" : [{"weight" : 100}, {"weight" : 200}]
}
201:
{
  "id" : "offerId_123",
  "product" : {
    "id" : "product_abc",
    "name" : "box-moving"
  },
  "totalPrice" : 123.43
}
The request has nothing to do with the response, except that one is required to find all the information for the other.
The way I interpret "manipulation of resources through representations", this also applies to creation. Following that, I would say that one should instead create the search process as a resource:
POST /api/offer-searches
{
  "request" : {
    "customerId" : "123",
    "origin" : {"city" : "Amsterdam"},
    "destination" : {"city" : "New York"},
    "boxes" : [{"weight" : 100}, {"weight" : 200}]
  }
}
201:
{
  "id" : "offerSearch_123",
  "request" : {
    "customerId" : "123",
    "origin" : {"city" : "Amsterdam"},
    "destination" : {"city" : "New York"},
    "boxes" : [{"weight" : 100}, {"weight" : 200}]
  },
  "offers" : [ {
    "id" : "offerId_123",
    "product" : {
      "id" : "product_abc",
      "name" : "box-moving"
    },
    "totalPrice" : 123.43
  } ]
}
Here the request and the response are representations of the same object; during the process it is enriched with results, but both still represent the same thing: the search process.
This has the advantage that the process can be "tracked": because it is identified, it can be read again later. You could still have /api/offers/offerId_123 return the created offer so clients don't have to go through the clutter of the search resource. But it also comes with quite a trade-off: complexity.
Now my question is: can we even call the first, more RPC-like approach REST? Or, to comply with REST's constraints, should the second approach be used?
How does the approach compare to how we do things on the web?
For the most part, sending information to a server is realized using HTML forms. So we are dealing with a lot of requests that look something like
POST /efc913bf-ac21-4bf4-8080-467ca8e3e656
Content-Type: application/x-www-form-urlencoded
a=b&c=d
and the responses then look like
201 Created
Location: /a2596478-624f-4775-a490-09edb551a929
Content-Location: /a2596478-624f-4775-a490-09edb551a929
Content-Type: text/html
<html>....</html>
In other words, it's perfectly normal that (a) the representations of the resource are not the information that was sent to the server, but instead something the server computed from the information it was sent, and (b) they are not necessarily of the same schema as the payload of the request... not necessarily even the same media type.
In an anemic document store, you are more likely to be using PUT or PATCH. PATCH requests normally have a patch document in the request-body, so you would expect the representations to be different (think application/json-patch+json). But even in the case of PUT, the server is permitted to make changes to the representation when creating its resource (to make it consistent with its own constraints).
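For instance, here is a minimal sketch of that asymmetry using Python's requests (the endpoint and the patch document are made up for illustration; only the media-type contrast matters):
import json
import requests

# Hypothetical resource; the request body is a JSON Patch document...
patch = [{"op": "replace", "path": "/totalPrice", "value": 99.99}]
resp = requests.patch(
    "https://api.example.org/offers/offerId_123",
    data=json.dumps(patch),
    headers={"Content-Type": "application/json-patch+json"},
)
# ...while the response representation (e.g. application/json, or an error
# document) need not share the request's schema or even its media type.
print(resp.status_code, resp.headers.get("Content-Type"))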
And of course, when you are dealing with responses that contain a representation of "the action", or representations of errors, then once again the response may be quite dissimilar from the request.
TL;DR REST doesn't care if the representation of a bid "matches" the representation of the RFP.
You might decide it's a good idea anyway, but it isn't necessary to satisfy REST's constraints or the semantics of HTTP.

How do I properly expose my public signing key for CAS-generated JWT tickets

UPDATE:
After further testing, it seems the RSA setup I thought was working actually isn't. Until CAS supports asymmetric keys for JWT tickets, this question is moot.
My use case:
CAS VERSION: 6.2.0-RC2
Using CAS for single sign-on for a number of applications. The backend identity provider is LDAP. The client of interest is an SPA that redirects to CAS for login. Upon successful login to CAS, a JWT is issued via a configured service provider. I have set up the service provider to sign the JWT using asymmetric RSA keys. This is all working. What I can't get to work is the "jwtTicketSigningPublicKey" actuator endpoint.
I want to publish the public key so that my SPA can dynamically fetch it for signature validation, letting me rotate the RSA keys if necessary without having to change anything on the SPA side. I assumed this was the purpose of this feature, but when I hit the endpoint after exposing it as directed here, I get a 404.
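For reference, the client-side check I want to enable would look roughly like this (sketched in Python rather than the SPA's actual code; the key format returned by the endpoint and the use of PyJWT are assumptions on my part):
import jwt        # PyJWT (an assumption on my part), with the 'cryptography' extra
import requests

# Hypothetical flow: assumes the actuator endpoint returns a PEM-encoded key.
key_pem = requests.get(
    "http://mycas.com/cas/actuator/jwtTicketSigningPublicKey",
    params={"service": "http://example.org"},
).text

service_ticket_jwt = "eyJ..."  # placeholder for the JWT ticket handed to the SPA

# Validate the signature with the freshly fetched key, so rotating the RSA
# pair on the CAS side never requires an SPA redeploy.
claims = jwt.decode(
    service_ticket_jwt,
    key_pem,
    algorithms=["RS256"],
    options={"verify_aud": False},  # audience checking omitted in this sketch
)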
My config:
Here is what my cas.config file looks like as it relates to this endpoint:
# Expose it
management.endpoints.web.exposure.include=jwtTicketSigningPublicKey
# Enable it
management.endpoint.jwtTicketSigningPublicKey.enabled=true
# Allow access to it
cas.monitor.endpoints.endpoint.jwtTicketSigningPublicKey.access=ANONYMOUS
I then bounce the CAS server and I can see the endpoint in the actuator links at http://mycas.com/cas/actuator like so:
"jwtTicketSigningPublicKey":{"href":"http://mycas.com/cas/actuator/jwtTicketSigningPublicKey","templated":false}
As the documentation describes, I can pass an optional service parameter to get the public key associated with a "per-service" implementation, which is what I have. I hit the endpoint like so:
http://mycas.com/cas/actuator/jwtTicketSigningPublicKey?service=http://example.org
At that point I receive a 404. I also get a 404 if I hit the endpoint without the service parameter, but I would expect that, since I don't actually have a globally defined RSA key pair.
My attempt at a solution:
The most logical place I can imagine providing this public key is in the service configuration, alongside where I provide the private key. However, I can find no documented parameter for defining the public key. This is what I have tried, to no avail.
{
  "@class" : "org.apereo.cas.services.RegexRegisteredService",
  "serviceId" : "^http://.*",
  "name" : "Sample",
  "id" : 10,
  "properties" : {
    "@class" : "java.util.HashMap",
    "jwtAsServiceTicket" : {
      "@class" : "org.apereo.cas.services.DefaultRegisteredServiceProperty",
      "values" : [ "java.util.HashSet", [ "true" ] ]
    },
    "jwtSigningSecretAlg" : {
      "@class" : "org.apereo.cas.services.DefaultRegisteredServiceProperty",
      "values" : [ "java.util.HashSet", [ "RS256" ] ]
    },
    "jwtAsServiceTicketSigningKey" : {
      "@class" : "org.apereo.cas.services.DefaultRegisteredServiceProperty",
      "values" : [ "java.util.HashSet", [ "MyPrivateKeyGoesHere" ] ]
    },
    "jwtAsServiceTicketSigningPublicKey" : {
      "@class" : "org.apereo.cas.services.DefaultRegisteredServiceProperty",
      "values" : [ "java.util.HashSet", [ "MyPublicKeyGoesHere" ] ]
    }
  }
}
The signing key works and is a documented parameter. The signing secret algorithm is also documented here. But the last "...SigningPublicKey" parameter was a complete shot in the dark, because I have not found any docs on the matter other than what is defined here.
Summary:
What I am hoping to find with this question is someone who is familiar with this endpoint and knows how to configure it properly so that the signing public key is available to my SPA.

HTTPCookieStorage with GroupContainerIdentifier is not persistent

I set up an HTTPCookieStorage like this:
let storage = HTTPCookieStorage.sharedCookieStorage(forGroupContainerIdentifier: "user100")
storage.cookieAcceptPolicy = .always
let cookieProperties: [HTTPCookiePropertyKey : Any] = [
    .name : "example\(Date().timeIntervalSince1970)",
    .value : "value\(Date().timeIntervalSince1970)",
    .domain : "www.example\([100,200,300].randomElement()!).com",
    .originURL : "www.example.com",
    .path : "/",
    .version : "0",
    .expires : Date().addingTimeInterval(2629743)
]
storage.setCookie(HTTPCookie(properties: cookieProperties)!)
I found out that doing the same with HTTPCookieStorage.shared actually saves the cookies, but this custom HTTPCookieStorage does not. How do I make it persistent?
Here is my finding: the purpose of forGroupContainerIdentifier is to share cookies across your applications. In one app you create a group for cookie storage, and in another application you access that group, so you need to use the correct group name. You need to create the app group on the Apple Developer portal and add it to both of your applications' bundle IDs; after that, you will be able to use those cookies. For more information, please check this thread: Cookies storage

Firebase Chat - Notification to Other User

I am currently building an app using Firebase, and decided to implement a chat as well.
I was able to use JSQMessagesVC as a GUI and get the Firebase chat aspect working as well (by combining two UIDs to create a chatroom, e.g. /123_456). However, I am lost on how to notify the other user that they have received a message. (If user 123 opens chatroom 123_456 and sends a message in it, how do I notify user 456 that they have received a message?)
Thanks for the help!
Your question is really about designing your database. When implementing chat functionality you need to rethink your database structure. It's all about the database structure, because Firebase doesn't provide triggers that let your primary action in one node automatically perform actions on other nodes (i.e., database "tables").
Though you might have read the tutorials already, take another look anyway at how to structure your data.
Here's a nice chat example that might help in your case. Though it refers to a group chat, you can take a look at how the database is structured for that purpose.
Basically, you need to perform some extra client-side writes to different nodes when someone opens a room to chat with others.
Oh here's another SO Answer you should take a look at.
I had the same issue, which I solved by adding an extra node where each user has a number of chatrooms. Put an observer on the user in the chatroom (in the case below, "0888a5dc-fe8d-4498-aa69-f9dd1361fe54"), with a counter, a description and a timestamp. On each new message, update counter, lastMessage, etc. See below:
"Messages" : {
"0888a5dc-fe8d-4498-aa69-f9dd1361fe54" : {
"0888a5dc-fe8d-4498-aa69-f9dd1361fe5451879163-8b35-452b-9872-a8cb4c84a6ce" : {
"counter" : 2,
"description" : "Breta",
"lastMessage" : “cool”,
"lastUser" : "51879163-8b35-452b-9872-a8cb4c84a6ce",
"messageType" : "txt",
"sortTimestamp" : -1.459518501758476E9,
"updatedAction" : 1.459518501758468E9,
"userId" : "51879163-8b35-452b-9872-a8cb4c84a6ce"
},
"0888a5dc-fe8d-4498-aa69-f9dd1361fe547bfe8604-58ad-4d18-a528-601b76dd2206" : {
"counter" : 0,
"description" : "Romeo",
"lastMessage" : “yep”,
"lastUser" : "0888a5dc-fe8d-4498-aa69-f9dd1361fe54",
"messageType" : "txt",
"sortTimestamp" : -1.459527387138615E9,
"updatedAction" : 1.459527387138613E9,
"userId" : "7bfe8604-58ad-4d18-a528-601b76dd2206"
}
}
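A rough sketch of that extra client-side write, using the Firebase Realtime Database REST API from Python (the database URL, the combined room key and the lack of an auth token are all simplifying assumptions):
import requests

# Placeholders: database URL and the recipient's chatroom-summary node key.
DB_URL = "https://your-project.firebaseio.com"
ROOM_KEY = "0888a5dc-fe8d-4498-aa69-f9dd1361fe5451879163-8b35-452b-9872-a8cb4c84a6ce"

def record_new_message(sender_id, text):
    node = f"{DB_URL}/Messages/0888a5dc-fe8d-4498-aa69-f9dd1361fe54/{ROOM_KEY}.json"
    # Read the current unread counter (not atomic; an SDK transaction is safer).
    current = requests.get(node).json() or {}
    update = {
        "counter": current.get("counter", 0) + 1,
        "lastMessage": text,
        "lastUser": sender_id,
        "messageType": "txt",
    }
    # PATCH only touches the listed children, so the recipient's observer on
    # this node fires and can badge/notify them about the new message.
    requests.patch(node, json=update)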

Testing HATEOAS URLs

I'm developing a service that has a RESTful API. The API is JSON-based and uses HAL for HATEOAS links between resources.
The implementation shouldn't matter to the question, but I'm using Java and Spring MVC.
Some example requests:
GET /api/projects
{
  "_links" : {
    "self" : {
      "href" : "example.org/api/projects"
    },
    "projects" : [ {
      "href" : "example.org/api/projects/1234",
      "title" : "The Project Name"
    }, {
      "href" : "example.org/api/projects/1235",
      "title" : "The Second Project"
    } ]
  },
  "totalProjects" : 2
}
GET /api/projects/1234
{
  "_links" : {
    "self" : {
      "href" : "example.org/api/projects/1234"
    },
    "tasks" : [ {
      "href" : "example.org/api/projects/1234/tasks/543",
      "title" : "First Task"
    }, {
      "href" : "example.org/api/projects/1234/tasks/544",
      "title" : "Second Task"
    } ]
  },
  "id" : 1234,
  "name" : "The Project Name",
  "progress" : 60,
  "status" : "ontime",
  "targetDate" : "2014-06-01"
}
Now, how should I test GET requests to a single project? I have two options and I'm not sure which one is better:
Testing for /api/projects/{projectId} in the tests, replacing {projectId} with the id of the project the mock service layer expects/returns.
Requesting /api/projects/ first then testing the links returned in the response. So the test will not have /api/projects/{projectId} hardcoded.
The first option makes the tests much simpler, but it basically hardcodes the URLs, which is the thing HATEOAS was designed to avoid in the first place. The tests will also need to change if I ever change the URL structure for one reason or another.
The second option is more "correct" in the HATEOAS sense, but the tests will be much more convoluted; I need to traverse all parent resources to test a child resource. For example, to test GET requests to a task, I need to request /api/projects/, get the link to /api/projects/1234, request that and get the link to /api/projects/1234/tasks/543, and finally test that! I'll also need to mock a lot more in each test if I test this way.
The advantage of the second option is that I can freely change the URLs without changing the tests.
If your goal is testing a Hypermedia API, then your testing tools need to understand how to process and act on the hypermedia contained in a resource.
And yes, the challenge is how deep you decide to traverse the link hierarchy. Also, you need to account for non-GET methods.
If these are automated tests, one strategy is to organize them into resource units and only test the links returned by the resource under test: a module for projects, and others for project, tasks, task, and so on. This does require some hard-coding of well-known URLs for each module, but it allows you to manage the tests more easily around your resource model.
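As a sketch of that link-following style (an assumption-laden example: it uses Python's requests against a hypothetical running instance at BASE rather than the question's Spring MVC mock setup, and assumes the href values are absolute URLs):
import requests

# Hypothetical base URL of a running test instance.
BASE = "http://localhost:8080"

def test_get_single_project_via_links():
    # Only the entry point is hardcoded; the project URL comes from _links.
    projects = requests.get(f"{BASE}/api/projects").json()
    first = projects["_links"]["projects"][0]

    # Follow the advertised link instead of building /api/projects/{id} by hand.
    project = requests.get(first["href"]).json()

    assert project["_links"]["self"]["href"] == first["href"]
    assert project["name"] == first["title"]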
I don't know much about HATEOAS, but here is what I can say.
You may try swat, a Perl/curl-based DSL for test automation of web and REST services. Swat was designed to simplify the URL "juggling" you are probably talking about here. A quick reference for how this could be done with swat (a straightforward way; there are more elegant solutions):
$ mkdir -p api/project/project_id
$ echo '200 OK' > api/project/project_id/get.txt
$ nano api/project/project_id/hook.pm
modify_resource(sub{
    my $r = shift;               # this is the original route: api/project/project_id/
    my $pid = $ENV{project_id};
    $r =~ s{/project_id}{/$pid}; # dynamically rewrite the route to api/project/{project_id}
    return $r;
});
$ project_id=12345 swat http://your-rest-api # run swat test suite!
More complicated examples can be found in the documentation.
(*) Disclosure - I am the tool author.
If you use Spring HATEOAS you can use ControllerLinkBuilder (http://docs.spring.io/autorepo/docs/spring-hateoas/0.19.0.RELEASE/api/org/springframework/hateoas/mvc/ControllerLinkBuilder.html) for link creation in your tests, as described in http://docs.spring.io/spring-hateoas/docs/0.19.0.RELEASE/reference/html/#fundamentals.obtaining-links. With ControllerLinkBuilder, there are no hard-coded URLs.
ControllerLinkBuilderUnitTest.java (https://github.com/spring-projects/spring-hateoas/blob/4e1e5ed934953aabcf5490d96d7ac43c88bc1d60/src/test/java/org/springframework/hateoas/mvc/ControllerLinkBuilderUnitTest.java) shows how to use ControllerLinkBuilder in tests.