How can I turn on TTL decrementing on OpenFlow switches? - rest

I use Mininet with a custom topology and the Ryu REST controller "ofctl-rest.py". After installing some flow entries in the switches, sending some packets over the network, and capturing traffic, I noticed that the switches do not decrease the TTL field in the IP layer. I figured out that I have to tell the switches to decrement the TTL field (this has been possible since OpenFlow version 1.1). To do so I tried the line "type": "DEC_NW_TTL", but it does not work. My complete command looks like this:
curl -X POST -d '{
    "dpid": 1,
    "cookie": 1,
    "cookie_mask": 1,
    "table_id": 0,
    "idle_timeout": 3600,
    "hard_timeout": 3600,
    "priority": 0,
    "flags": 1,
    "match": {
        "in_port": 1
    },
    "actions": [
        {
            "type": "OUTPUT",
            "port": 4,
            "type": "DEC_NW_TTL"
        }
    ]
}' http://localhost:8080/stats/flowentry/add
What am I doing wrong? How do I have to modify the command so that the switch decrements the TTL? Please help me.
Thank you in advance.

I think you have to specify more than one action. You should also change the order of the actions: first decrement the TTL, and afterwards send the packet out. Sending the packet out first and decrementing afterwards doesn't work.
I would try it this way:
curl -X POST -d '{
    "dpid": 1,
    "cookie": 1,
    "cookie_mask": 1,
    "table_id": 0,
    "idle_timeout": 3600,
    "hard_timeout": 3600,
    "priority": 0,
    "flags": 1,
    "match": {
        "in_port": 1
    },
    "actions": [
        {
            "type": "DEC_NW_TTL"
        },
        {
            "type": "OUTPUT",
            "port": 4
        }
    ]
}' http://localhost:8080/stats/flowentry/add
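
For comparison, the same two-action flow installed directly from a Ryu application would look roughly like this (a sketch for OpenFlow 1.3; the datapath object is assumed to come from the usual switch-features handler):

# Sketch: install the DEC_NW_TTL + OUTPUT flow from a Ryu app (OF 1.3).
def install_dec_ttl_flow(datapath):
    ofp = datapath.ofproto
    parser = datapath.ofproto_parser
    match = parser.OFPMatch(in_port=1)
    actions = [parser.OFPActionDecNwTtl(),   # decrement TTL first
               parser.OFPActionOutput(4)]    # then forward out port 4
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    mod = parser.OFPFlowMod(datapath=datapath, table_id=0, priority=0,
                            idle_timeout=3600, hard_timeout=3600,
                            match=match, instructions=inst)
    datapath.send_msg(mod)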

The answer by Abbadon should work. You should put each action within its own pair of curly brackets. However, the order of the actions in the POST request doesn't matter: OpenFlow defines a default order for the different types of actions:
1. copy TTL inwards: apply copy TTL inward actions to the packet
2. pop: apply all tag pop actions to the packet
3. push-MPLS: apply the MPLS tag push action to the packet
4. push-PBB: apply the PBB tag push action to the packet
5. push-VLAN: apply the VLAN tag push action to the packet
6. copy TTL outwards: apply the copy TTL outwards action to the packet
7. decrement TTL: apply the decrement TTL action to the packet
8. set: apply all set-field actions to the packet
9. qos: apply all QoS actions, such as set queue, to the packet
10. group: if a group action is specified, apply the actions of the relevant group bucket(s) in the order specified by this list
11. output: if no group action is specified, forward the packet on the port specified by the output action
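
To confirm what the switch actually installed, you can read the flow back through ofctl-rest's flow-stats endpoint; a small sketch (dpid 1, as in the question):

import requests

# Sketch: read back the installed flows for dpid 1 from ofctl-rest.
resp = requests.get("http://localhost:8080/stats/flow/1")
for flow in resp.json()["1"]:
    print(flow["match"], flow["actions"])  # expect DEC_NW_TTL before OUTPUT:4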

Related

What does `port` mean in the Kafka ZooKeeper path `/brokers/ids/$id`?

I have two Kafka listeners with this config:
listeners=PUBLIC_SASL://0.0.0.0:5011,PUBLIC_PLAIN://0.0.0.0:5010
advertised.listeners=PUBLIC_SASL://192.168.181.2:5011,PUBLIC_PLAIN://192.168.181.2:5010
listener.security.protocol.map=PUBLIC_SASL:SASL_PLAINTEXT,PUBLIC_PLAIN:PLAINTEXT
inter.broker.listener.name=PUBLIC_SASL
Port 5010 is PLAINTEXT, and 5011 is SASL_PLAINTEXT.
After startup, I found this information in ZooKeeper (/brokers/ids/$id):
{
    "listener_security_protocol_map": {
        "PUBLIC_SASL": "SASL_PLAINTEXT",
        "PUBLIC_PLAIN": "PLAINTEXT"
    },
    "endpoints": [
        "PUBLIC_SASL://192.168.181.2:5011",
        "PUBLIC_PLAIN://192.168.181.2:5010"
    ],
    "jmx_port": -1,
    "features": {},
    "host": "192.168.181.2",
    "timestamp": "1658485899402",
    "port": 5010,
    "version": 5
}
What does the port field mean? Why is the port 5010? Could I change it to 5011?
What you're seeing are the advertised.port and advertised.host Kafka settings, which may be parsed from the advertised.listeners list for backward compatibility. Both of these are deprecated, however; the Kafka protocol now uses the protocol map and the corresponding endpoints list instead.
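
If you want to inspect that registration programmatically, here is a minimal sketch using the kazoo client (broker id 0 and a local ZooKeeper are assumptions; adjust both):

import json
from kazoo.client import KazooClient

# Sketch: read a broker registration znode and compare the fields.
zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()
data, _stat = zk.get("/brokers/ids/0")
info = json.loads(data)
print(info["endpoints"])  # the authoritative per-listener addresses
print(info.get("port"))   # the legacy single-port field
zk.stop()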

How to find the current a0 parameter in a Cardano node

Is there any way to get the current value of the a0 parameter from the node?
The parameter is not found in the configuration file, and it is not passed as a parameter when the node is started.
You can get the protocol parameters using cardano-cli:
$ cardano-cli query protocol-parameters --mary-era --mainnet
{
    "poolDeposit": 500000000,
    "protocolVersion": {
        "minor": 0,
        "major": 4
    },
    "minUTxOValue": 1000000,
    "decentralisationParam": 0.12,
    "maxTxSize": 16384,
    "minPoolCost": 340000000,
    "minFeeA": 44,
    "maxBlockBodySize": 65536,
    "minFeeB": 155381,
    "eMax": 18,
    "extraEntropy": {
        "tag": "NeutralNonce"
    },
    "maxBlockHeaderSize": 1100,
    "keyDeposit": 2000000,
    "nOpt": 500,
    "rho": 3.0e-3,
    "tau": 0.2,
    "a0": 0.3
}
a0 is the last key in the above output.
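If you only need a0, you can extract it from that output directly (assuming jq is installed):
$ cardano-cli query protocol-parameters --mary-era --mainnet | jq .a0
0.3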
Using the Blockfrost API, you can get the parameters in effect at each epoch, including a0:
https://cardano-mainnet.blockfrost.io/api/v0/epochs/{number}/parameters
API docs:
https://docs.blockfrost.io/#tag/Cardano-Epochs/paths/~1epochs~1%7Bnumber%7D~1parameters/get
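
A minimal sketch of the same call in Python (the epoch number 290 and the project_id key are placeholders; Blockfrost authenticates via a project_id header):

import requests

# Sketch: fetch one epoch's protocol parameters from Blockfrost.
url = "https://cardano-mainnet.blockfrost.io/api/v0/epochs/290/parameters"
resp = requests.get(url, headers={"project_id": "<your-blockfrost-project-id>"})
resp.raise_for_status()
print(resp.json()["a0"])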

How to create a schedule in PagerDuty using the REST API and Python

I went through the PagerDuty documentation but was still not able to understand what parameters to send in the request body, and I'm also having trouble understanding how to make the API request. If anyone can share sample code for creating a PagerDuty schedule, that would help me a lot.
Below is sample code to create a schedule in PagerDuty.
Each list can have multiple items (to add more users / layers):
import requests

url = "https://api.pagerduty.com/schedules?overflow=false"

payload = {
    "schedule": {
        "schedule_layers": [
            {
                "start": "<dateTime>",  # Start time of layer | "start": "2021-01-01T00:00:00+05:30"
                "users": [
                    {
                        "user": {
                            "id": "<string>",  # ID of user to add to the layer
                            "summary": "<string>",
                            "type": "<string>",  # "type": "user"
                            "self": "<url>",
                            "html_url": "<url>"
                        }
                    }
                ],
                "rotation_virtual_start": "<dateTime>",  # Start of rotation | "rotation_virtual_start": "2021-01-01T00:00:00+05:30"
                "rotation_turn_length_seconds": "<integer>",  # Rotation length, for switching between multiple users | <seconds>
                "id": "<string>",  # Auto-generated. Only needed if you want to update an existing schedule layer
                "end": "<dateTime>",  # End time of layer | "end": "2021-01-01T00:00:00+05:30"
                "restrictions": [
                    {
                        "type": "<string>",  # Restrict the shift to certain timings, weekly, daily, etc. | "type": "daily_restriction"
                        "duration_seconds": "<integer>",  # Duration of the restriction | "duration_seconds": "300"
                        "start_time_of_day": "<partial-time>",  # Start time of the restriction | "start_time_of_day": "00:00:00"
                        "start_day_of_week": "<integer>"
                    }
                ],
                "name": "<string>"  # Name to give the layer
            }
        ],
        "time_zone": "<activesupport-time-zone>",  # Timezone for the layer and its timings | "time_zone": "Asia/Kolkata"
        "type": "schedule",
        "name": "<string>",  # Name to give the schedule
        "description": "<string>",  # Description to give the schedule
        "id": "<string>"  # Auto-generated. Only needed if you want to update an existing schedule
    }
}

headers = {
    'Authorization': 'Token token=<Your token here>',
    'Accept': 'application/vnd.pagerduty+json;version=2',
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, json=payload)
print(response.text)
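
For orientation, a minimal filled-in payload might look like this (a sketch; the user ID "PABC123" and the times are placeholders, substitute values from your own account):

# Sketch: a minimal single-layer schedule with one user rotating daily.
minimal_payload = {
    "schedule": {
        "type": "schedule",
        "name": "Primary On-Call",
        "time_zone": "Asia/Kolkata",
        "schedule_layers": [
            {
                "name": "Layer 1",
                "start": "2021-01-01T00:00:00+05:30",
                "rotation_virtual_start": "2021-01-01T00:00:00+05:30",
                "rotation_turn_length_seconds": 86400,  # rotate every 24h
                "users": [{"user": {"id": "PABC123", "type": "user"}}]
            }
        ]
    }
}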
The best way to do this is to get the Postman collection for PagerDuty and edit the request to your liking. Once you get a successful response, convert it into code using Postman's built-in feature.
Using the PagerDuty API for scheduling is not easy. Creating a new schedule is okay-ish, but if you decide to update a schedule, it is definitely not trivial. You'll probably run into a bunch of limitations: the number of restrictions per layer, having to reuse current layers, etc.
As an option you can use the Python library pdscheduling: https://github.com/skrypka/pdscheduling

Sensu remediation does not work

I have configured the following check:
"cron": {
"command": "check-process.rb -p cron",
"subscribers": [],
"handlers": [
"mailer",
"flowdock",
"remediator"],
"interval": 10,
"occurences": 3,
"refresh": 600,
"standalone": false,
"remediation": {
"light_remediation": {
"occurrences": [1, 2],
"severities": [2]
}
}
},
"light_remediation": {
"command": "touch /tmp/test",
"subscribers": [],
"handlers": ["flowdock"],
"publish": false,
"interval": 10
},
The mailer and flowdock handlers are executed as expected, so I receive e-mails and Flowdock notifications when the cron service is not running. The problem is that the remediator check is not working, and I have no idea why. I have used this: https://github.com/nstielau/sensu-community-plugins/blob/remediation/handlers/remediation/sensu.rb
I ran into similar issues but finally managed to get it working with some modifications.
First off, the gotchas:
Each server (client.json.template) needs to subscribe to a channel named $HOSTNAME:
"subscribers": ["$HOSTNAME"],
You don't have a "trigger_on" section, which is in the code but not in the example; you want to set that up to trigger on $HOSTNAME as well. In my_check.json.template:
"trigger_on": ["$HOSTNAME"]
The remediation checks need to subscribe to $HOSTNAME as well (so you need to template those checks out too):
"subscribers": ["$HOSTNAME"],
At this point, you should be able to trigger your remediation from the sensu server manually.
Lastly, the example code listed in sensu.rb is broken: the occurrences check needs to be one level up in the loop, and trigger_on is not inside the remediations section, it's outside.
subscribers = @event['check']['trigger_on'] ? [@event['check']['trigger_on']].flatten : [client]
...
      # Check remediations matching the current severity
      next unless (conditions["severities"] || []).include?(severity)
      remediations_to_trigger << check
    end
  end
  remediations_to_trigger
end
After that, it should work for you.
Oh, and one last gotcha: in your client.json.template, set
"safe_mode": true
It defaults to false.
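
Putting the gotchas together, the check definitions from the question would end up roughly like this (a sketch; $HOSTNAME stands for whatever your templating expands per host):

"cron": {
    "command": "check-process.rb -p cron",
    "subscribers": [],
    "handlers": ["mailer", "flowdock", "remediator"],
    "interval": 10,
    "occurrences": 3,
    "refresh": 600,
    "standalone": false,
    "trigger_on": ["$HOSTNAME"],
    "remediation": {
        "light_remediation": {
            "occurrences": [1, 2],
            "severities": [2]
        }
    }
},
"light_remediation": {
    "command": "touch /tmp/test",
    "subscribers": ["$HOSTNAME"],
    "handlers": ["flowdock"],
    "publish": false,
    "interval": 10
}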

Asterisk REST ARI snoop (cURL)

I am trying:
curl -v -u j123:j321 -X POST "http://localhost:8088/ari/channels/1421226074.4874/snoop?spy=SIP/695"
In response I receive:
"message": "Invalid direction specified for spy"
I have also tried:
SIP/695; SIP:695, SIP#695, localhost#695, channel, channelName
None of it works.
A call comes into the queue from SIP 416 to queue_1 and is distributed to 694. I need to connect 695 to listen in on channel 1421226074.4874.
I only need to listen, not to whisper.
Please help me.
The error message is telling you what the problem is:
"message": "Invalid direction specified for spy"
The spy parameter is a direction for spying, not the channel to spy on (see the reference documentation here). You've already specified the channel to snoop on in the URI path; you need to specify the direction of the media in the spy parameter.
As an aside, the auto-generated wiki apparently isn't displaying enum values, which is unfortunate. We'll have to fix that.
For reference, here's the parameter in the Swagger JSON:
"name": "spy",
"description": "Direction of audio to spy on",
"paramType": "query",
"required": false,
"allowMultiple": false,
"dataType": "string",
"defaultValue": "none",
"allowableValues": {
"valueType": "LIST",
"values": [
"none",
"both",
"out",
"in"
]
}
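
So a working request passes one of those direction values, for example both to hear both sides of the call. A sketch of the corrected command (the app parameter names your ARI/Stasis application, assumed here to be called myapp, since the snoop channel has to be handed to one):

curl -v -u j123:j321 -X POST "http://localhost:8088/ari/channels/1421226074.4874/snoop?spy=both&app=myapp"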