Is there a way to know exactly which logs belong to a particular requestId in CloudWatch Logs for Lambda?
Yes, the log data has the request ID in it:
....
@requestId 9dbe69e3-21d4-4158-86a4-512be3300208
@timestamp 1577448608126
....
Let me know if this is what you were looking for.
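For example, once the logs are in CloudWatch you can narrow a Logs Insights query to a single invocation by its request ID; @requestId is one of the fields Insights discovers automatically for Lambda logs (the ID below is the one from the sample output above):

filter @requestId = "9dbe69e3-21d4-4158-86a4-512be3300208"
| sort @timestamp asc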
This is the first time that I am setting up a POST in PeopleSoft, so I need some input. As I understand it, the Service Operation needs to be set up with the parameters in the BODY. I was fine with setting up GET, since all the parameters come in via the URL/URI.
How do I configure the Service Operation for the POST? I have set up the document and the message already.
We receive the data in JSON format.
Since it's a very high-level question, I'm going to refer you to PeopleBooks. See page 219 in the PeopleBook below. You can also find a lot of blog posts on the same topic.
https://docs.oracle.com/cd/F52214_01/psft/pdf/pt859tibr-b032022.pdf
At a high level, to give you a head start:
Create a document definition.
Attach it to a message definition.
Create a service operation definition and use this message definition for the request.
Create a message definition for the response, based on the expected structure.
Populate the document body and invoke the service operation from PeopleCode (a rough sketch follows below; see the attached PeopleBook for the full code reference).
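To make the last step concrete, here is a rough PeopleCode sketch, assuming a document MY_DOC in package MY_PKG (version v1) with a string property field1, and a synchronous service operation MY_SVC_OP; all of these names are placeholders:

Local Document &reqDoc, &respDoc;
Local Message &reqMsg, &respMsg;

/* Build the request document and populate its body (placeholder names). */
&reqDoc = CreateDocument("MY_PKG", "MY_DOC", "v1");
&reqDoc.DocumentElement.GetPropertyByName("field1").Value = "some value";

/* Wrap the document in a request message and invoke the service operation. */
&reqMsg = CreateMessage(Operation.MY_SVC_OP);
&reqMsg.SetDocument(&reqDoc);
&respMsg = %IntBroker.SyncRequest(&reqMsg);

If &respMsg.ResponseStatus = %IB_Status_Success Then
   &respDoc = &respMsg.GetDocument();
End-If;

The PeopleBook linked above has the authoritative version of this pattern.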
Please post a question with specifics if you get stuck somewhere.
PubsubIO allows deduplicating messages based on an ID attribute:
PubsubIO.readStrings().fromSubscription(pubSubSubscription).withIdAttribute("message_id")
For how long does Dataflow remember this id? Is it documented anywhere?
It is documented; however, it has not yet been migrated to the V2+ version of the docs. The information can still be found in the V1 docs:
https://cloud.google.com/dataflow/model/pubsub-io#using-record-ids
"If you've set a record ID label when using PubsubIO.Read, when Dataflow receives multiple messages with the same ID (which will be read from the attribute with the name of the string you passed to idLabel), Dataflow will discard all but one of the messages. However, Dataflow does not perform this de-duplication for messages with the same record ID value that are published to Cloud Pub/Sub more than 10 minutes apart."
We currently have multiple CloudWatch log streams per EC2 instance. This is horrible to debug: searching for "ERROR XY" across all instances involves either digging into each log stream (time-consuming) or querying via the AWS CLI (also time-consuming).
I would prefer to have a log stream combining the log data of all instances of a specific type, let's say all "webserver" instances log their "apache2" log data to one central stream and "php" log data to another central stream.
Obviously, I still want to be able to figure out which log entry stems from which instance - as I would be with central logging via syslogd.
How can I add the custom field "instance id" to the logs in cloudwatch?
The best way to organize logs in CloudWatch Logs is as follows:
The log group represents the log type. For example: webserver/prod.
The log stream represents the instance id (i.e. the source).
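For example, with the unified CloudWatch agent you can use the {instance_id} placeholder in log_stream_name so that every instance writes to its own stream within a shared group (the file path and group name here are illustrative):

{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/apache2/error.log",
            "log_group_name": "webserver/prod",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}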
For querying, I highly recommend using the Insights feature (I helped build it when I worked at AWS). The log stream name is available with each log record as a special @logStream field.
You can query across all instances like this:
filter @message like /ERROR XY/
Or inside one instance like this:
filter @message like /ERROR XY/ and @logStream = "instance_id"
When I do a kubectl describe <pod>, the output ends with an "Events" section displaying events related to that pod; for example, an event with reason "failedScheduling" and the message "Failed for reason PodFitsResources and possibly others".
How can I query the API to return that list of events?
If I call /api/v1/namespaces/<ns>/pods/<pod_name>, it doesn't return any Events. If I try the /api/v1/events endpoint, I can specify a labelSelector parameter, but the name of the pod isn't a label on the Event, though it does appear in the involvedObject.name field.
I could request the entire Event stream and filter out the few Events that interest me client-side, but that seems like overkill. kubectl is able to do it, so I figure there must be some way that I'm missing.
Thanks.
I think events support a fieldSelector for the involved object's kind and name.
You can also turn kubectl's verbosity up to 8 (-v=8) to see the network calls it makes.
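For example (pod name and namespace are placeholders):

kubectl get events -n <ns> --field-selector involvedObject.kind=Pod,involvedObject.name=<pod_name>

or against the API directly:

GET /api/v1/namespaces/<ns>/events?fieldSelector=involvedObject.kind=Pod,involvedObject.name=<pod_name>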
If you are still wondering how kubectl gets the events along with the describe command, then have a look at the following:
https://github.com/kubernetes/kubernetes/blob/b6a0718858876bbf8cedaeeb47e6de7e650a6c5b/pkg/kubectl/describe/versioned/describe.go#L242
What's happening is that kubectl first gets the details of the requested resource (see https://github.com/kubernetes/kubernetes/blob/b6a0718858876bbf8cedaeeb47e6de7e650a6c5b/pkg/kubectl/describe/versioned/describe.go#L235), then fetches all the events from that namespace and filters them down to the ones for the requested resource (see line 242 at the same link).
So they are not using some other undocumented API; what you thought of as overkill is exactly what they are doing.
My REST API format:
http://myserver.com/rest/messages - get all messages
http://myserver.com/rest/messages/5 - get message with id=5
How should the URL look when I want to get all messages owned by a user with id=1?
http://myserver.com/rest/messages/user/1
OR
http://myserver.com/rest/usermessages/1 (no such entity UserMessage exists in the system)
This sounds like a filter/search for messages, which is usually done with query parameters:
GET http://myserver.com/rest/messages?ownedBy=1
Since /messages is a collection resource from which you want a subcollection, you filter it.
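If messages are instead modeled as belonging to a user, a nested path such as GET http://myserver.com/rest/users/1/messages is also a common convention; note this assumes a /users resource, which your API may not expose. Query parameters remain the more flexible choice once you need to filter on more than one attribute.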