I am trying to call a REST web service as follows:
use REST::Client;

my $client = REST::Client->new();
$client->setHost("http://myhost.com");
$client->POST('/xx/yy/Submit',
    $params,    # request body: must be a string (e.g. a JSON-encoded hash)
    {'Content-type' => 'application/json'}
);
my $response = $client->responseContent();
I do not know how to create the parameter list; the API definition is as follows:
{
  "Credential": {
    "Username": "String",
    "Password": "String"
  },
  "DataCoding": "Default",
  "Header": {
    "From": "String",
    "ValidityPeriod": 0
  },
  "Message": "String",
  "To": ["String"]
}
I tried the following, but it did not work:
use JSON;

my %params = (
    Credential => {
        Username => $username,
        Password => $password
    },
    DataCoding => 'Default',
    Header => {
        From => $from,
        ValidityPeriod => 0
    },
    Message => "Test",
    To => ['90535xxxx', '90542xxxxx']
);
$client->POST('/v1/xml/syncreply/Submit',
    encode_json(\%params),
    {'Content-type' => 'application/json',
     'Accept' => 'application/json'}
);
It gives following error:
<ErrorCode>SerializationException</ErrorCode>
Error: System.Runtime.Serialization.SerializationException: There was an error deserializing the object of type Barabut.Gw.Submit. The data at the root level is invalid. Line 1, position 1. ---> System.Xml.XmlException: The data at the root level is invalid. Line 1, position 1.
It turned out I had written the endpoint address incorrectly; with the correct endpoint, the code worked in the format shown above. Sorry for the confusion.
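For comparison, the payload construction and JSON encoding can be sketched in Python (placeholder credentials and numbers; this mirrors what encode_json(\%params) produces on the Perl side):

```python
import json

# Payload shaped like the API definition above (placeholder values).
params = {
    "Credential": {"Username": "user", "Password": "secret"},
    "DataCoding": "Default",
    "Header": {"From": "TEST", "ValidityPeriod": 0},
    "Message": "Test",
    "To": ["90535xxxx", "90542xxxxx"],
}

# The HTTP body must be the JSON-encoded string, not the raw dict/hash.
body = json.dumps(params)
print(body)
```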
The most popular way of writing and deploying Logstash pipelines is to create a my_pipeline.conf file and run it like:
bin/logstash -f conf/my_pipeline.conf
Elastic offers an alternative consisting of APIs.
Logstash PUT API:
PUT _logstash/pipeline/my_pipeline
{
"description": "Sample pipeline for illustration purposes",
"last_modified": "2021-01-02T02:50:51.250Z",
"pipeline_metadata": {
"type": "logstash_pipeline",
"version": "1"
},
"username": "elastic",
"pipeline": "input {}\n filter { grok {} }\n output {}",
"pipeline_settings": {
"pipeline.workers": 1,
"pipeline.batch.size": 125,
"pipeline.batch.delay": 50,
"queue.type": "memory",
"queue.max_bytes.number": 1,
"queue.max_bytes.units": "gb",
"queue.checkpoint.writes": 1024
}
}
as well as a Kibana API that also upserts the Logstash pipeline.
Kibana API:
PUT <kibana host>:<port>/api/logstash/pipeline/<id>
$ curl -X PUT api/logstash/pipeline/hello-world
{
"pipeline": "input { stdin {} } output { stdout {} }",
"settings": {
"queue.type": "persisted"
}
}
As you can see in both APIs, the content of the Logstash "pipeline.conf" file is included in the "pipeline" key of the JSON body of the HTTP call.
Basically I have dozens of *.conf pipeline files and I would like to avoid developing complex code to parse them and reformat their content with escape characters for newlines, carriage returns, and so on.
My question is: do you know an "easy" way to feed this "pipeline" parameter in the body of the HTTP call, with as little formatting transformation of the original .conf files as possible?
To illustrate how complex this formatting operation might be, here is an example of what a terraform provider does behind the scenes to generate the expected format from a simple pipeline ".conf" file.
Here is the original content of the file logs_alerts_pubsub.conf:
input {
google_pubsub {
project_id => "pj-becfr-monitoring-mgmt"
topic => "f7_monitoring_topic_${environment}_alerting_eck"
subscription => "f7_monitoring_subscription_${environment}_alerting_eck"
json_key_file => "/usr/share/logstash/config/logstash-sa.json"
codec => "json"
}
}
filter {
mutate {
add_field => { "application_code" => "a-alerting-eck"
"leanix_id" => "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
"workfront_id" => "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
}
}
output {
elasticsearch {
index => "alerts-%%{+yyyy.MM.dd}"
hosts => [ "${url}" ]
user => "elastic"
ssl => true
ssl_certificate_verification => false
password => "${pwd}"
cacert => "/etc/logstash/certificates/ca.crt"
}
}
Here is the terraform code:
locals {
pipeline_list = fileset(path.root, "./modules/elasticsearch_logstash_pipeline/*.conf")
splitpipepath = split("/", var.pipeline)
pipename = element(local.splitpipepath, length(local.splitpipepath) - 1)
pipename_ex = split(".", local.pipename)[0]
category = split("_", local.pipename_ex)[1]
}
resource "kibana_logstash_pipeline" "newpipeline" {
for_each = local.pipeline_list
name = "tf-${local.category}-${var.environment}-${local.pipename_ex}"
description = "Logstash Pipeline through Kibana from file"
pipeline = templatefile(var.pipeline, { environment = var.environment, url = var.elastic_url, pwd = var.elastic_password })
settings = {
"queue.type" = "persisted"
}
}
And below you see the content of the tf.state file (focus on the "pipeline" key):
{
"module": "module.elasticsearch_logstash_pipeline[\"modules/elasticsearch_logstash_pipeline/logs_alerts_pubsub.conf\"]",
"mode": "managed",
"type": "kibana_logstash_pipeline",
"name": "newpipeline",
"provider": "provider[\"registry.terraform.io/disaster37/kibana\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"description": "Logstash Pipeline through Kibana from file",
"id": "tf-alerts-dev-logs_alerts_pubsub",
"name": "tf-alerts-dev-logs_alerts_pubsub",
"pipeline": "input {\n google_pubsub {\n project_id =\u003e \"pj-becfr-monitoring-mgmt\"\n topic =\u003e \"f7_monitoring_topic_dev_alerting_eck\"\n subscription =\u003e \"f7_monitoring_subscription_dev_alerting_eck\"\n json_key_file =\u003e \"/usr/share/logstash/config/logstash-sa.json\"\n codec =\u003e \"json\"\n }\n }\nfilter {\n mutate {\n add_field =\u003e { \"application_code\" =\u003e \"a-alerting-eck\"\n \"leanix_id\" =\u003e \"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\"\n \"workfront_id\" =\u003e \"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\"\n }\n }\n}\noutput {\n elasticsearch {\n index =\u003e \"alerts-gcp\"\n hosts =\u003e [ \"https://35.187.29.254:9200\" ]\n user =\u003e \"elastic\"\n ssl =\u003e true\n ssl_certificate_verification =\u003e false\n password =\u003e \"HIDDEN\"\n cacert =\u003e \"/etc/logstash/certificates/ca.crt\"\n }\n}",
"settings": {
"queue.type": "persisted"
},
"username": "elastic"
},
"sensitive_attributes": [
[
{
"type": "get_attr",
"value": "pipeline"
}
]
],
"private": "bnVsbA=="
}
]
}
If you have any idea of straightforward commands, either in bash or in any language, where I could do dump/load or encode/decode or apply any simple regex, as generic as possible, it would be helpful (FYI, in this specific context I cannot use terraform).
I found a way to substitute the variables inside the <pipeline>.conf files, as well as a way to correctly format the content of such a file as a JSON string. To restart from the beginning, here is the content of the logstash pipeline file logs_alerts_pubsub.conf:
input {
google_pubsub {
project_id => "pj-becfr-monitoring-mgmt"
topic => "f7_monitoring_topic_${environment}_alerting_eck"
subscription => "f7_monitoring_subscription_${environment}_alerting_eck"
json_key_file => "/usr/share/logstash/config/logstash-sa.json"
codec => "json"
}
}
filter {
mutate {
add_field => { "application_code" => "a-alerting-eck"
"leanix_id" => "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
"workfront_id" => "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
}
}
output {
elasticsearch {
index => "alerts-%%{+yyyy.MM.dd}"
hosts => [ "${url}" ]
user => "elastic"
ssl => true
ssl_certificate_verification => false
password => "${pwd}"
cacert => "/etc/logstash/certificates/ca.crt"
}
}
Now the substitution of the variables with their values:
export url=google.com
export pwd=HjkTdddddss
export environment=dev
envsubst < logs_alerts_pubsub.conf
input {
google_pubsub {
project_id => "pj-becfr-monitoring-mgmt"
topic => "f7_monitoring_topic_dev_alerting_eck"
subscription => "f7_monitoring_subscription_dev_alerting_eck"
json_key_file => "/usr/share/logstash/config/logstash-sa.json"
codec => "json"
}
}
filter {
mutate {
add_field => { "application_code" => "a-alerting-eck"
"leanix_id" => "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
"workfront_id" => "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
}
}
output {
elasticsearch {
index => "alerts-%%{+yyyy.MM.dd}"
hosts => [ "google.com" ]
user => "elastic"
ssl => true
ssl_certificate_verification => false
password => "HjkTdddddss"
cacert => "/etc/logstash/certificates/ca.crt"
}
}
Now the formatting of the pipeline file content as a JSON string:
jq -c -Rs "." <(envsubst < logs_alerts_pubsub.conf)
"input {\n google_pubsub {\n project_id => \"pj-becfr-monitoring-mgmt\"\n topic => \"f7_monitoring_topic_dev_alerting_eck\"\n subscription => \"f7_monitoring_subscription_dev_alerting_eck\"\n json_key_file => \"/usr/share/logstash/config/logstash-sa.json\"\n codec => \"json\"\n }\n }\nfilter {\n mutate {\n add_field => { \"application_code\" => \"a-alerting-eck\"\n \"leanix_id\" => \"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\"\n \"workfront_id\" => \"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\"\n }\n }\n}\noutput {\n elasticsearch {\n index => \"alerts-%%{+yyyy.MM.dd}\"\n hosts => [ \"google.com\" ]\n user => \"elastic\"\n ssl => true\n ssl_certificate_verification => false\n password => \"HjkTdddddss\"\n cacert => \"/etc/logstash/certificates/ca.crt\"\n }\n}"
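The same two steps (envsubst-style substitution, then JSON string encoding) can also be sketched in Python, assuming the .conf files only use ${var} placeholders:

```python
import json
import os

os.environ["environment"] = "dev"

# A fragment of the pipeline file; in practice this would come from
# open("logs_alerts_pubsub.conf").read().
conf = 'topic => "f7_monitoring_topic_${environment}_alerting_eck"\n'

# os.path.expandvars substitutes ${var} from the environment (like envsubst),
# and json.dumps escapes quotes and newlines (like jq -c -Rs ".").
substituted = os.path.expandvars(conf)
pipeline_json = json.dumps(substituted)
print(pipeline_json)
```

Unlike envsubst, os.path.expandvars leaves unknown variables untouched rather than replacing them with an empty string, which is usually the safer behavior here.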
I am using the bootstrap datepicker in my view and every time I call the update method, the date sets fine but the date picker suddenly reverts from dd/mm/yyyy format to mm/dd/yyyy format.
Controller
public function actionDias(){
Yii::$app->response->format = \yii\web\Response::FORMAT_JSON;
if (isset($_POST['anio']) and isset($_POST['cuatrimestre'])) {
$cuatrimestre = $_POST['cuatrimestre'];
$anio = $_POST['anio'];
$fechaMin = CampusActividadDia::find()->where(['cuatrimestre' => $cuatrimestre])->andWhere(['anio' => $anio])->min('fecha');
$fecha_desde = date("d/m/Y", strtotime($fechaMin));
$fechaMax = CampusActividadDia::find()->where(['cuatrimestre' => $cuatrimestre])->andWhere(['anio' => $anio])->max('fecha');
$fecha_hasta = date("d/m/Y", strtotime($fechaMax));
return ([
'fecha_desde' => $fecha_desde,
'fecha_hasta' => $fecha_hasta
]);
} else return null;
}
View
<div id="dias" class="form-group hidden">
<label class="control-label">Rango de Fechas:</label>
<?php
echo DatePicker::widget([
'model' => $model,
'attribute' => 'fecha_desde',
'attribute2' => 'fecha_hasta',
'options' => ['id'=>'fecha_desde','disabled' => ($model->fecha_desde == "")],
'options2' => ['id'=>'fecha_hasta','disabled' => ($model->fecha_desde == "")],
'type' => DatePicker::TYPE_RANGE,
'separator' => 'hasta',
'pluginOptions' => [
'autoclose' => true,
'format' => 'dd/mm/yyyy'
]
]); ?>
</div>
Js
$.ajax({
url: URL_BASE + "/campus/dias",
type: "post",
dataType: "json",
data: {
cuatrimestre: this.value,
anio: $('#anio').val()
},
success: function (res) {
const fecha_desde = res.fecha_desde;
const fecha_hasta = res.fecha_hasta;
$(fechaDesdeRef).kvDatepicker('update',new Date(fecha_desde));
$(fechaHastaRef).kvDatepicker('update', new Date(fecha_hasta));
$(fechaDesdeRef).prop('disabled', false);
$(fechaHastaRef).prop('disabled', false);
},
error: function (xhr, ajaxOptions, thrownError) {
alert("ERROR " + xhr.status);
},
});
I've also tried to change the date format both ways, setting the global default format and setting it locally, but without success, like the following:
$.fn.kvDatepicker.defaults.format = 'dd/mm/yyyy'
$(datePickerRef).kvDatepicker({
formatDate: 'd/m/Y',
date:'d/m/Y'
})
I've also read this issue, but there is no conclusion in it:
https://github.com/uxsolutions/bootstrap-datepicker/issues/1775
It sounds like your DatePicker is overwriting pluginOptions when the model is loaded. You might need to configure DatePicker, or override it, so that the format is not lost.
config/main.php
'modules' => [
'datecontrol' => [
'class' => 'kartik\datecontrol\Module',
'displaySettings' => [
'date' => 'php:d-M-Y',
'time' => 'php:H:i:s A',
'datetime' => 'php:d-m-Y H:i:s A',
],
'saveSettings' => [
'date' => 'php:Y-m-d',
'time' => 'php:H:i:s',
'datetime' => 'php:Y-m-d H:i:s',
],
// automatically use kartikwidgets for each of the above formats
'autoWidget' => true,
],
],
'components' => [
//http://webtips.krajee.com/override-bootstrap-css-js-yii-2-0-widgets/
'assetManager' => [
'bundles' => [
'yii\bootstrap4\BootstrapAsset' => [
'sourcePath' => '#npm/bootstrap/dist'
],
'yii\bootstrap4\BootstrapPluginAsset' => [
'sourcePath' => '#npm/bootstrap/dist'
],
'#frontend\modules\invoice\assets\CoreCustomCssJsAsset'=>[
'sourcePath' => '#frontend/modules/invoice/assets'
],
],
],
],
composer.json
"require": {
"php": ">=7.4.9",
"guzzlehttp/guzzle":"*",
"yiisoft/yii2": "~2.0.41.1",
"yiisoft/yii2-bootstrap": "~2.0.9",
"yiisoft/yii2-bootstrap4": "^2.0.8",
"bower-asset/bootstrap": "~3.4.1",
"bower-asset/ladda": "*",
"bower-asset/font-awesome": "~4.7.0.0",
"npm-asset/jquery": "^2.2",
"vlucas/phpdotenv": "*",
"insolita/yii2-migration-generator": "~3.1",
"ifsnop/mysqldump-php": "*",
"warrence/yii2-kartikgii": "dev-master",
"kartik-v/yii2-bootstrap4-dropdown": "#dev",
"kartik-v/bootstrap-fileinput": "dev-master",
"kartik-v/yii2-editable": "#dev",
"kartik-v/yii2-grid":"#dev",
"kartik-v/yii2-widget-timepicker": "#dev",
"kartik-v/yii2-date-range": "*",
"kartik-v/yii2-social": "#dev",
"kartik-v/yii2-dynagrid": "dev-master",
"kartik-v/yii2-tree-manager": "#dev",
"kartik-v/yii2-mpdf":"dev-master",
"kartik-v/bootstrap-star-rating": "#dev",
"kartik-v/yii2-slider": "dev-master",
"kartik-v/yii2-number" : "#dev",
"kartik-v/yii2-editors": "#dev",
"kartik-v/yii2-validators": "dev-master"
},
"require-dev": {
"yiisoft/yii2-debug": "~2.1.0",
"yiisoft/yii2-gii": "~2.1.0",
"yiisoft/yii2-faker": "~2.0.0",
"codeception/codeception": "^4.0",
"codeception/verify": "~0.5.0 || ~1.1.0"
},
"config": {
"process-timeout": 1800
},
"fxp-asset": {
"installer-paths": {
"npm-asset-library": "vendor/npm-asset",
"bower-asset-library": "vendor/bower-asset"
}
}
}
These settings have worked for me. You might consider modifying your AppAsset.php as follows to incorporate Bootstrap 4.
<?php
namespace frontend\assets;
use yii\web\AssetBundle;
use yii\bootstrap4\BootstrapAsset;
use yii\web\YiiAsset;
use yii\web\JqueryAsset;
/**
* Main frontend application asset bundle.
*/
class AppAsset extends AssetBundle
{
public $sourcePath = '#app/assets/app';
public $baseUrl = '#app';
public $css = [
'css/site.css',
'//netdna.bootstrapcdn.com/bootstrap/3.0.0/css/bootstrap-glyphicons.css',
'//use.fontawesome.com/releases/v5.3.1/css/all.css',
'//cdnjs.cloudflare.com/ajax/libs/font-awesome/5.9.0/css/all.min.css'
];
public $js = [
'//kit.fontawesome.com/85ba10e8d4.js',
];
public $depends = [
BootstrapAsset::class,
YiiAsset::class,
JqueryAsset::class,
];
}
In Perl I can push a $hashref onto @array and use this data in a later foreach and possibly encode_json (for an HTTP POST).
I can't figure out how to recreate the same logic in Go.
$VAR1 = [
{
'address' => 'test.com',
'id' => 101,
'hostgroups' => [
'zero',
'one',
'or many'
],
'host_name' => 'test.com',
'alias' => 'test.com',
'template' => 'generic',
'file_id' => 'etc/config'
},
{
'address' => 'test2.com',
'id' => 102,
'hostgroups' => [
'zero',
'one',
'or many'
],
'host_name' => 'test2.com',
'alias' => 'test2.com',
'template' => 'generic',
'file_id' => 'etc/config'
},
(..)
var array = []map[string]interface{}{
{"address": "test.com", "hostgroups": []string{"zero", "one", "or many"}, "id": 101},
{"address": "test2.com", "hostgroups": []string{"zero", "one", "or many"}, "id": 102},
}
This is the answer:
type host map[string]interface{}

var hosts []host

h := host{
    "id":         id,
    "file_id":    "etc/config/hosts.cfg",
    "host_name":  host_name,
    "alias":      host_name,
    "address":    host_name,
    "hostgroups": hg,
    "template":   "generic-host",
}
hosts = append(hosts, h)
// hosts can then be serialized for the HTTP POST with json.Marshal(hosts)
// from encoding/json, the Go counterpart of Perl's encode_json.
I want to create a creative with the API. When I post the object_story_spec parameter, I get this error: 'Creative spec must be an associative array (optionally json-encoded)'.
This is my JSON value; it is valid:
{ "page_id" : "103830656322074", "link_data": { "call_to_action": {"type":"LEARN_MORE","value":{"link":"facebook.com/"}}, "caption": "Reklam #1", "name": "Reklam #1", "link": "facebook.com/", "message": "facebook.com/" }}
developers.facebook.com/docs/marketing-api/reference/ad-creative#Creating
You should URL-encode the $object_story_spec before passing it into the creative, like below:
$object_story_spec = urlencode($object_story_spec);
$creative = new AdCreative(null, 'ad_Acount_id');
$creative->setData(array(
AdCreativeFields::NAME => 'Sample Creative',
AdCreativeFields::OBJECT_STORY_SPEC => $object_story_spec,
));
It should be something like this:
object_story_spec={
"page_id": "<PAGE_ID>",
"video_data": {
"call_to_action": {"type":"LIKE_PAGE","value":{"page":"<PAGE_ID>"}},
"description": "try it out",
"image_url": "<THUMBNAIL_URL>",
"video_id": "<VIDEO_ID>"
}
}
Or
$object_story_spec = new ObjectStorySpec();
$object_story_spec->setData(array(
ObjectStorySpecFields::PAGE_ID => <PAGE_ID>,
ObjectStorySpecFields::LINK_DATA => <LINK_DATA>,
));
$creative = new AdCreative(null, 'ad_Acount_id');
$creative->setData(array(
AdCreativeFields::NAME => 'Sample Creative',
AdCreativeFields::OBJECT_STORY_SPEC => $object_story_spec,
));
Just cast the targeting field to a string. For example, if you are using requests:
import requests
from datetime import datetime
params = {
'name': 'My Ad Set',
'optimization_goal': 'LINK_CLICKS',
'billing_event': 'IMPRESSIONS',
'bid_amount': 2,
'daily_budget': 100,
'campaign_id': campaign_id,
"targeting": str({
"age_max": 65,
"age_min": 18,
"flexible_spec": [..]
}),
'start_time': datetime.now().strftime("%Y-%m-%dT%H:%M:%SZ"),
'status': 'PAUSED',
'access_token': access_token,
}
response = requests.post(
url=f'https://graph.facebook.com/v11.0/act_{ad_account_id}/adsets',
params=params
)
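One caveat with this approach: str() on a dict produces Python's single-quoted repr rather than strict JSON, so if the endpoint rejects that form, json.dumps is the safer serializer (a sketch of the difference, not Facebook's documented requirement):

```python
import json

targeting = {"age_max": 65, "age_min": 18}

print(str(targeting))         # single-quoted Python repr: not valid JSON
print(json.dumps(targeting))  # double-quoted: valid JSON
```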
I inherited a script and I need to be able to access some data from a hash. I want to be able to access the MB_Path value from the following.
$VAR1 = bless(
{
'ME_Parts' => [
bless(
{
'ME_Bodyhandle' => bless(
{
'MB_Path' => '/tmp/msg-15072-1.txt'
},
'MIME::Body::File'
),
'ME_Parts' => [],
'mail_inet_head' => bless(
{
'mail_hdr_foldlen' => 79,
'mail_hdr_modify' => 0,
'mail_hdr_list' => [
'Content-Type: text/plain; charset="us-ascii"',
'Content-Transfer-Encoding: quoted-printable'
],
'mail_hdr_hash' => {
'Content-Type' => [
\$VAR1->{'ME_Parts'}[0]{'mail_inet_head'}
{'mail_hdr_list'}[0]
],
'Content-Transfer-Encoding' => [
\$VAR1->{'ME_Parts'}[0]{'mail_inet_head'}
{'mail_hdr_list'}[1]
]
},
'mail_hdr_mail_from' => 'KEEP',
'mail_hdr_lengths' => {}
},
'MIME::Head'
)
},
'MIME::Entity'
),
bless(
{
'ME_Bodyhandle' => bless(
{
'MB_Path' => '/tmp/msg-15072-2.html'
},
'MIME::Body::File'
),
'ME_Parts' => [],
'mail_inet_head' => bless(
{
'mail_hdr_foldlen' => 79,
'mail_hdr_modify' => 0,
'mail_hdr_list' => [
'Content-Type: text/html;charset="us-ascii"',
'Content-Transfer-Encoding: quoted-printable'
],
'mail_hdr_hash' => {
'Content-Type' => [
\$VAR1->{'ME_Parts'}[1]{'mail_inet_head'}
{'mail_hdr_list'}[0]
],
'Content-Transfer-Encoding' => [
\$VAR1->{'ME_Parts'}[1]{'mail_inet_head'}
{'mail_hdr_list'}[1]
]
},
'mail_hdr_mail_from' => 'KEEP',
'mail_hdr_lengths' => {}
},
'MIME::Head'
)
},
'MIME::Entity'
)
],
'ME_Epilogue' => [],
'ME_Preamble' => [],
'mail_inet_head' => bless(
{
'mail_hdr_foldlen' => 79,
'mail_hdr_modify' => 0,
'mail_hdr_list' => [
'Content-Type: multipart/alternative;boundary="----_=_NextPart_002_01CEB949.DC6B0180"'
],
'mail_hdr_hash' => {
'Content-Type' =>
[ \$VAR1->{'mail_inet_head'}{'mail_hdr_list'}[0] ]
},
'mail_hdr_mail_from' => 'KEEP',
'mail_hdr_lengths' => {}
},
'MIME::Head'
)
},
'MIME::Entity'
);
I thought I could simply do the following
print $ent->parts->($i)->{ME_Bodyhandle}->{MB_Path};
However, when I do that I get an error that the value is not initialized. But when I dump just $ent->parts->($i) I get the structure above.
I am just stuck on this one.
Thanks,
Leo C
You don't have a hash, you have an object (which happens to be implemented as a hash). That's why the Data::Dumper output keeps saying bless(...). You shouldn't be poking into its internals.
I think you're looking for
$ent->parts($i)->bodyhandle->path;
Until you have exhausted the possibilities of the documentation, there is no excuse for dumping the underlying data structure that represents a Perl object and hard-coding access to its components. The rules of encapsulation apply to Perl object-oriented programming just as much as any other language.
The documentation for
MIME::Entity
and
MIME::Body
is quite clear, and the code you need is something like this
for my $part ($ent->parts) {
my $path = $part->bodyhandle->path;
print $path, "\n";
}
output
/tmp/msg-15072-1.txt
/tmp/msg-15072-2.html
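The same principle, walking parts through the public API instead of the internal structures, applies in other languages too; for instance, with Python's email package (a rough analogy, not a translation of the Perl code):

```python
from email.message import EmailMessage

# Build a small multipart/alternative message, then iterate over its
# parts via the documented API rather than poking at internals.
msg = EmailMessage()
msg.set_content("plain body")
msg.add_alternative("<p>html body</p>", subtype="html")

for part in msg.iter_parts():
    print(part.get_content_type())
```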