Wazuh child decoder not parsing field correctly - ossec

I am trying to parse the log shown below with a child decoder in Wazuh 4.x, but for some reason it is not parsing the needed field.
Log entry
ossec: output: 'domainjoin-cli query|grep -i Domain': Domain = mydomain.local
Child Decoder
<decoder name="ossec-domain">
<parent>ossec</parent>
<type>ossec</type>
<prematch>^ossec: output:</prematch>
<regex type="pcre2">^'domainjoin-cli[ \t]query|grep[ \t]-i[ \t]Domain':[ \t]Domain[ \t]=[ \t](\S+)</regex>
<order>domain</order>
</decoder>
Output
ossec: output: 'domainjoin-cli query|grep -i Domain': Domain = mydomain.local
**Phase 1: Completed pre-decoding.
full event: 'ossec: output: 'domainjoin-cli query|grep -i Domain': Domain = mydomain.local'
**Phase 2: Completed decoding.
name: 'ossec'
parent: 'ossec'
**Phase 3: Completed filtering (rules).
id: '100008'
level: '3'
description: 'Server is in domain '
groups: '['ossec']'
firedtimes: '1'
hipaa: '['164.312.b']'
mail: 'False'
pci_dss: '['10.6.1']'
**Alert to be generated.

Taking into account the parent decoder:
<decoder name="ossec">
<prematch>^ossec: </prematch>
<type>ossec</type>
</decoder>
First of all, you should delete the prematch tag, since the parent already has a prematch regex. If you want to keep the prematch, you can also use the offset attribute to indicate that the string output comes right after ossec: .
<decoder name="ossec-domain">
<parent>ossec</parent>
<type>ossec</type>
<prematch offset="after_parent">^output:</prematch>
<regex type="pcre2">^'domainjoin-cli[ \t]query|grep[ \t]-i[ \t]Domain':[ \t]Domain[ \t]=[ \t](\S+)</regex>
<order>domain</order>
</decoder>
After that, note that the regex is wrong because you are using ^. ^ indicates the beginning of the log, and in this case the string after that character is not the beginning of the log, so you have to remove that character from the regex.
Also, take into account that | is an OR operator, which means that either the regex on its left or the one on its right should match the log. In your use case it should match the literal | character, so you need to escape it so it is not treated as an OR operator.
Taking into account these indications, the following decoder is the one you should use:
<decoder name="ossec-domain">
<parent>ossec</parent>
<type>ossec</type>
<prematch offset="after_parent">^output:</prematch>
<regex type="pcre2">'domainjoin-cli[ \t]query\|grep[ \t]-i[ \t]Domain':[ \t]Domain[ \t]=[ \t](\S+)</regex>
<order>domain</order>
</decoder>
Logtest output:
ossec: output: 'domainjoin-cli query|grep -i Domain': Domain = mydomain.local
**Phase 1: Completed pre-decoding.
full event: 'ossec: output: 'domainjoin-cli query|grep -i Domain': Domain = mydomain.local'
**Phase 2: Completed decoding.
name: 'ossec'
parent: 'ossec'
domain: 'mydomain.local'
I hope this helps. If you have more problems, please tell me the exact Wazuh version you are using and I will be glad to help.

Mikrotik - result of script job as var is empty

I need to know some info about the LTE/3G connection from the output of a command, with Zabbix. The RouterBoard is on version 6.47.7.
[admin@Mikrotik_24] > interface lte info lte1 once
pin-status: ok
registration-status: registered
functionality: full
manufacturer: "MikroTik"
model: "R11e-LTE"
revision: "MikroTik_CP_2.160.000_v008"
current-operator: MTS
psc: 295
lac: 5205
current-cellid: 241903616
access-technology: 3G
session-uptime: 21h3m18s
imei: 355654090621868
imsi: 250016652966098
uicc: 89701011266529660988
earfcn: 10762
ecno: 0dB
rssi: -95dBm
For example "rssi", "access-technology", "uicc" and so on.
The problem with such a command is described by MikroTik themselves: the data is printed continuously, and you can't capture it easily.
https://wiki.mikrotik.com/wiki/Manual:Scripting_Tips_and_Tricks#Get_values_from_looped_interactive_commands_like_.22monitor.22
So I have a script to get some values as a global var.
/interface lte info lte1 once do={:global at $"access-technology" }
and ":put $at" in the terminal works fine. But my idea is to take the result over SNMP by executing another script that outputs this generated var $at. That is possible with the OID they provide for the "script result" purpose.
The question is: how do I output this var in the script?
[admin@Mikrotik_24] > /system script export compact
# oct/31/2020 01:52:57 by RouterOS 6.47.7
# software id = PW8X-NNQ5
#
# model = RouterBOARD wAP R-2nD
/system script
add dont-require-permissions=no name=at owner=admin policy=\
ftp,reboot,read,write,policy,test,password,sniff,sensitive,romon source=\
"/interface lte info lte1 once do={:global at \$\"access-technology\" } \r\
\n"
add dont-require-permissions=no name=at_result owner=admin policy=\
ftp,reboot,read,write,policy,test,password,sniff,sensitive,romon source=":put \$at"
[admin@Mikrotik_24] > :environment print
at="3G"
[admin@Mikrotik_24] > /system script environment print
# NAME VALUE
0 at 3G
[admin@Mikrotik_24] > /system script run at_result
[admin@Mikrotik_24] >
You see, /system script run at_result outputs nothing, but in the terminal I can see it:
[admin@Mikrotik_24] > :put $at
3G
[admin@Mikrotik_24] >
OK. The right form for using a global variable in a script is
{
:global test
set $test "my data"
}
I have to declare it first.
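Applied to the case above, a minimal sketch of the fixed at_result script source (keeping the same variable name at) would be:
:global at
:put $at
Without the :global at declaration the script runs in its own scope and sees an undeclared local at, so it outputs nothing; with it, /system script run at_result should print the value (3G here), which can then be fetched over SNMP via the script-result OID.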

How to use kubernetes python sdk to redeploy a deployment

version info:
python3.7
kubernetes==8.0.0
doc: https://github.com/kubernetes-client/python/tree/release-8.0/kubernetes
I only found the update API, not the redeploy API.
thanks
If you want to partially update an existing deployment, use the PATCH method. An example below:
from pprint import pprint
import kubernetes.client
from kubernetes.client.rest import ApiException

# create an instance of the API class
# ('configuration' is an already-initialised kubernetes.client.Configuration)
api_instance = kubernetes.client.AppsV1Api(kubernetes.client.ApiClient(configuration))
name = 'name_example' # str | name of the Deployment
namespace = 'namespace_example' # str | object name and auth scope, such as for teams and projects
body = {'spec': {'replicas': 3}} # object | the fields you want to update (example values)
pretty = 'pretty_example' # str | If 'true', then the output is pretty printed. (optional)
dry_run = 'dry_run_example' # str | When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed (optional)
try:
    api_response = api_instance.patch_namespaced_deployment(name, namespace, body, pretty=pretty, dry_run=dry_run)
    pprint(api_response)
except ApiException as e:
    print("Exception when calling AppsV1Api->patch_namespaced_deployment: %s\n" % e)
If you want to replace the existing deployment with a new deployment, use the PUT method. An example below:
# create an instance of the API class (imports and 'configuration' as above)
api_instance = kubernetes.client.AppsV1Api(kubernetes.client.ApiClient(configuration))
name = 'name_example' # str | name of the Deployment
namespace = 'namespace_example' # str | object name and auth scope, such as for teams and projects
body = kubernetes.client.V1Deployment() # V1Deployment | the full replacement object
pretty = 'pretty_example' # str | If 'true', then the output is pretty printed. (optional)
dry_run = 'dry_run_example' # str | When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed (optional)
try:
    api_response = api_instance.replace_namespaced_deployment(name, namespace, body, pretty=pretty, dry_run=dry_run)
    pprint(api_response)
except ApiException as e:
    print("Exception when calling AppsV1Api->replace_namespaced_deployment: %s\n" % e)
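There is no dedicated "redeploy" call in the client. If by redeploy you mean rolling the pods again with an unchanged spec, one common workaround (a sketch, not an official API; the annotation key below is arbitrary) is to PATCH a pod-template annotation so the Deployment controller performs a new rolling update:
import datetime

# assumes api_instance, name and namespace are set up as in the examples above
body = {
    'spec': {
        'template': {
            'metadata': {
                'annotations': {
                    # changing this value is what triggers the rollout
                    'example.com/redeployed-at': datetime.datetime.utcnow().isoformat()
                }
            }
        }
    }
}
api_response = api_instance.patch_namespaced_deployment(name, namespace, body)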

Yocto variable not defined but set with _ operator

I'm struggling with something I'm not sure I'm addressing correctly.
In a Yocto environment (for STM32MP1 by the way) I have to configure a new target.
Hence I added to meta-st/meta-st-stm32mp/conf/machine/include/st-machine-extlinux-config-stm32mp.inc this section, which looks like the others already available:
EXTLINUX_BOOTDEVICE_EMMC = "mmc1"
EXTLINUX_BOOTDEVICE_SDCARD = "mmc0"
EXTLINUX_ROOT_EMMC = "${@bb.utils.contains('ST_VENDORFS','1','root=/dev/mmcblk1p4','root=/dev/mmcblk1p3',d)}"
EXTLINUX_ROOT_NAND = "ubi.mtd=UBI rootfstype=ubifs root=ubi0:rootfs"
# Define available targets to use
UBOOT_EXTLINUX_CONFIGURED_TARGETS += "mp151a_sdcard"
UBOOT_EXTLINUX_CONFIGURED_TARGETS += "mp151a_emmc"
# Define bootprefix for each target
UBOOT_EXTLINUX_BOOTPREFIXES_mp151a_sdcard = "${EXTLINUX_BOOTDEVICE_SDCARD}_stm32mp151a_"
UBOOT_EXTLINUX_BOOTPREFIXES_mp151a_emcc = "${EXTLINUX_BOOTDEVICE_EMCC}_stm32mp151a_"
# Define labels for each target
UBOOT_EXTLINUX_LABELS_mp151a_sdcard = "stm32mp151a-sdcard"
UBOOT_EXTLINUX_LABELS_mp151a_emcc = "stm32mp151a-emcc"
# Define default boot config for each target
UBOOT_EXTLINUX_DEFAULT_LABEL_mp151a_sdcard ?= "stm32mp151a-sdcard"
UBOOT_EXTLINUX_DEFAULT_LABEL_mp151a_emcc ?= "stm32mp151a-emcc"
# Define FDT overrides for all labels
UBOOT_EXTLINUX_FDT_stm32mp151a-sdcard = "/stm32mp151a.dtb"
UBOOT_EXTLINUX_FDT_stm32mp151a-emcc = "/stm32mp151a.dtb"
# Define ROOT overrides for all labels
UBOOT_EXTLINUX_ROOT_stm32mp151a-sdcard = "${EXTLINUX_ROOT_SDCARD}"
UBOOT_EXTLINUX_ROOT_stm32mp151a-emcc = "${EXTLINUX_ROOT_EMCC}"
But when I bitbake <image> (that includes the file above) I get this output:
DEBUG: Executing python function update_extlinuxconf_targets
NOTE: UBOOT_EXTLINUX_CONFIGURED_TARGETS: mp157a-dk1_sdcard mp157a-dk1_sdcard-optee mp157c-dk2_sdcard mp157c-dk2_sdcard-optee mp157c-ed1_emmc mp157c-ed1_emmc-optee mp157c-ed1_sdcard mp157c-ed1_sdcard-optee mp157c-ev1_emmc mp157c-ev1_emmc-optee mp157c-ev1_nand mp157c-ev1_nor-sdcard mp157c-ev1_nor-emmc mp157c-ev1_sdcard mp157c-ev1_sdcard-optee mp151a_sdcard mp151a_emmc
NOTE: UBOOT_EXTLINUX_CONFIG_FLAGS: emmc sdcard
NOTE: *** Loop for config_label: emmc
NOTE: *** Loop for devicetree: stm32mp151a
NOTE: >>> New target label: mp151a_emmc
NOTE: >>> Append mp151a_emmc to UBOOT_EXTLINUX_TARGETS
NOTE: *** Loop for config_label: sdcard
NOTE: *** Loop for devicetree: stm32mp151a
NOTE: >>> New target label: mp151a_sdcard
NOTE: >>> Append mp151a_sdcard to UBOOT_EXTLINUX_TARGETS
NOTE: >>> UBOOT_EXTLINUX_TARGETS (updated): mp151a_emmc mp151a_sdcard
DEBUG: Python function update_extlinuxconf_targets finished
DEBUG: Executing python function do_create_multiextlinux_config
ERROR: UBOOT_EXTLINUX_ROOT not defined
DEBUG: Python function do_create_multiextlinux_config finished
ERROR: Function failed: do_create_multiextlinux_config
As you can see, the file is actually processed because it added the targets I've defined.
But it doesn't find UBOOT_EXTLINUX_ROOT even though it's "set" with the _ override operator:
UBOOT_EXTLINUX_ROOT_stm32mp151a-sdcard = "${EXTLINUX_ROOT_SDCARD}"
UBOOT_EXTLINUX_ROOT_stm32mp151a-emcc = "${EXTLINUX_ROOT_EMCC}"
I also tried to set the main variable to something like:
UBOOT_EXTLINUX_ROOT = ""
or
UBOOT_EXTLINUX_ROOT = "root=/dev/mmcblk1p4"
to see if that was the problem, but it doesn't change anything.
Is this something related to Yocto itself (I mean, something wrong in my syntax), or is it very specific to the SDK (meta-st)?
The error above should be raised by this file:
root = localdata.getVar('UBOOT_EXTLINUX_ROOT')
if not root:
    bb.fatal('UBOOT_EXTLINUX_ROOT not defined')
UPDATE
I checked the (huge) output of bitbake -e and among other targets I see:
# $UBOOT_EXTLINUX_ROOT [41 operations]
[...]
# "${EXTLINUX_ROOT_NOREMMC}"
# override[stm32mp157c-ev1-m4-examples-sdcard]:set /local/STM32MP15-Ecosystem-v1.1.0/Distribution-Package/openstlinux-4.19-thud-mp1-19-10-09/layers/meta-st/meta-st-stm32mp/conf/machine/include/st-machine-extlinux-config-stm32mp.inc:274
# "${EXTLINUX_ROOT_SDCARD}"
# override[stm32mp157c-ev1-m4-examples-sdcard-optee]:set /local/STM32MP15-Ecosystem-v1.1.0/Distribution-Package/openstlinux-4.19-thud-mp1-19-10-09/layers/meta-st/meta-st-stm32mp/conf/machine/include/st-machine-extlinux-config-stm32mp.inc:275
# "${EXTLINUX_ROOT_SDCARD_OPTEE}"
# override[stm32mp151a-sdcard]:set /local/STM32MP15-Ecosystem-v1.1.0/Distribution-Package/openstlinux-4.19-thud-mp1-19-10-09/layers/meta-st/meta-st-stm32mp/conf/machine/include/st-machine-extlinux-config-stm32mp.inc:296
# "${EXTLINUX_ROOT_SDCARD}"
# override[stm32mp151a-emcc]:set /local/STM32MP15-Ecosystem-v1.1.0/Distribution-Package/openstlinux-4.19-thud-mp1-19-10-09/layers/meta-st/meta-st-stm32mp/conf/machine/include/st-machine-extlinux-config-stm32mp.inc:297
[...]
# pre-expansion value:
# ""
UBOOT_EXTLINUX_ROOT=""
# $UBOOT_EXTLINUX_ROOT_cubemx-nor-sdcard
UBOOT_EXTLINUX_ROOT_cubemx-nor-sdcard="root=/dev/mmcblk0p3"
# $UBOOT_EXTLINUX_ROOT_cubemx-sdcard
UBOOT_EXTLINUX_ROOT_cubemx-sdcard="root=/dev/mmcblk0p6"
# $UBOOT_EXTLINUX_ROOT_stm32mp151a-emcc
UBOOT_EXTLINUX_ROOT_stm32mp151a-emcc="\${EXTLINUX_ROOT_EMCC}"
# $UBOOT_EXTLINUX_ROOT_stm32mp151a-sdcard
UBOOT_EXTLINUX_ROOT_stm32mp151a-sdcard="root=/dev/mmcblk0p6"
So far, if I understand correctly, the override values are correctly assigned (but not the ${EXTLINUX_ROOT_EMCC} - I don't understand where the \ comes from) but the main variable is still empty.
Adding UBOOT_EXTLINUX_ROOT = "root=/dev/mmcblk1p4" at the beginning of the above file seems to do the trick (even though I wrote the opposite before; perhaps I forgot to clear the cache?), but I don't think it's the right way to do it.
You should specify the name of the machine you want when building the image, i.e.:
MACHINE=stm32mp151a-sdcard bitbake <image>
This way, the UBOOT_EXTLINUX_ROOT gets the non-empty value "root=/dev/mmcblk0p6" (from the UBOOT_EXTLINUX_ROOT_stm32mp151a-sdcard variant of the variable).
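To illustrate, a minimal sketch of how the _ override syntax resolves here (assuming "stm32mp151a-sdcard" ends up in OVERRIDES, which is the effect of selecting it as the MACHINE):
UBOOT_EXTLINUX_ROOT = ""
UBOOT_EXTLINUX_ROOT_stm32mp151a-sdcard = "root=/dev/mmcblk0p6"
# With "stm32mp151a-sdcard" in OVERRIDES, getVar('UBOOT_EXTLINUX_ROOT') returns "root=/dev/mmcblk0p6".
# Without it, only the empty default value is visible, which is exactly what makes
# do_create_multiextlinux_config fail with "UBOOT_EXTLINUX_ROOT not defined".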

Annotating a corpus using Syntaxnet

I am trying to annotate a corpus using Syntaxnet. I added the following lines in the end of the /models/syntaxnet/syntaxnet/models/parsey_mcparseface/context.pbtxt file:
input {
name: 'input_file'
record_format: 'english-text'
Part {
file_pattern: '/home/melvyn/text.txt'
}
}
output {
name: 'output_file'
record_format: 'english-text'
Part {
file_pattern: '/home/melvyn/text-tagged.txt'
}
}
When I run the command:
./demo.sh --input=input_file --output=output_file
I am getting:
./demo.sh: line 31: bazel-bin/syntaxnet/parser_eval: No such file or directory
./demo.sh: line 43: bazel-bin/syntaxnet/parser_eval: No such file or directory
./demo.sh: line 55: bazel-bin/syntaxnet/conll2tree: No such file or directory
According to the answer given here, I changed my demo.sh file and now I get some errors which say:
[libprotobuf ERROR external/tf/google/protobuf/src/google/protobuf/text_format.cc:291] Error parsing text-format syntaxnet.TaskSpec: 200:8: Message type "syntaxnet.TaskOutput" has no field named "Part".
E external/tf/tensorflow/core/framework/op_segment.cc:53] Create kernel failed: Invalid argument: Could not parse task context at syntaxnet/models/parsey_mcparseface/context.pbtxt
E external/tf/tensorflow/core/common_runtime/executor.cc:333] Executor failed to create kernel. Invalid argument: Could not parse task context at syntaxnet/models/parsey_mcparseface/context.pbtxt
[[Node: DocumentSource = DocumentSourcebatch_size=32, corpus_name="stdin-conll", task_context="syntaxnet/models/parsey_mcparseface/context.pbtxt", _device="/job:localhost/replica:0/task:0/cpu:0"]]
What could be a possible solution?
Though it's not certain, I think you are not running the shell script from the root directory. Please try running it as per the instructions mentioned here.
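For example, a minimal sketch using the paths from your question (the checkout root is the directory that contains bazel-bin/):
cd /models/syntaxnet
syntaxnet/demo.sh --input=input_file --output=output_file
Running demo.sh from a subdirectory is what makes the relative bazel-bin/syntaxnet/... paths fail with "No such file or directory".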
I hope it helps.

net-snmp perl subagent not being triggered by snmpget

I've been working on a custom SNMP Mib and I've come up against a wall while trying to get an agent to return the proper data.
MIB (validated by running smilint -l 6):
IDB-MIB DEFINITIONS ::= BEGIN
IMPORTS
MODULE-IDENTITY, OBJECT-TYPE, Integer32, enterprises
FROM SNMPv2-SMI
MODULE-COMPLIANCE, OBJECT-GROUP FROM SNMPv2-CONF;
idb MODULE-IDENTITY
LAST-UPDATED "201307300000Z" -- Midnight 30 July 2013
ORGANIZATION "*********"
CONTACT-INFO "email: *******"
DESCRIPTION "description"
REVISION "201307300000Z" -- Midnight 29 July 2013
DESCRIPTION "First Draft"
::= { enterprises 42134 }
iDBCompliance MODULE-COMPLIANCE
STATUS current
DESCRIPTION
"Compliance statement for iDB"
MODULE
GROUP testGroup
DESCRIPTION
"This group is a test group"
::= {idb 1}
test2 OBJECT-TYPE
SYNTAX Integer32
UNITS "tests"
MAX-ACCESS read-write
STATUS current
DESCRIPTION
"A test object"
DEFVAL { 5 }
::= { idb 3 }
testGroup OBJECT-GROUP
OBJECTS {
test2
}
STATUS current
DESCRIPTION "all test objects"
::= { idb 2 }
END
Agent file:
#!/usr/bin/perl
use NetSNMP::OID(':all');
use NetSNMP::agent(':all');
use NetSNMP::ASN(':all');
sub myhandler {
my ($handler, $registration_info, $request_info, $requests) = @_;
print "Handling request\n";
for ($request = $requests; $request; $request = $request->next()) {
#
# Work through the list of varbinds
#
my $oid = $request->getOID();
print "Got request for oid $oid\n";
if ($request_info->getMode() == MODE_GET) {
if ($oid == new NetSNMP::OID($rootOID . ".3")) {
$request->setValue(ASN_INTEGER, 2);
}
}
}
}
{
$subagent = 0;
print "Running new agent\n";
my $rootOID = ".1.3.6.1.4.1.42134";
my $regoid = new NetSNMP::OID($rootOID);
if (!$agent) {
$agent = new NetSNMP::agent('Name'=>'my_agent_name','AgentX' => 1);
$subagent = 1;
print "Starting subagent\n";
}
print "Registering agent\n";
$agent->register("my_agent_name", $regoid, \&myhandler);
print "Agent registered\n";
if ($subagent) {
$SIG{'INT'} = \&shut_it_down;
$SIG{'QUIT'} = \&shut_it_down;
$running = 1;
while ($running) {
$agent->agent_check_and_process(1);
}
$agent->shutdown();
}
}
sub shut_it_down() {
$running = 0;
print "Shutting down agent\n";
}
When I run the agent I get the following:
Running new agent
Starting subagent!
Registering agent with oid idb
Agent registered
So I know that much is working. However when I run the following command:
snmpget -v 1 -c mycommunity localhost:161 test2.0
I get this error message:
Error in packet
Reason: (noSuchName) There is no such variable name in this MIB.
Failed object: IDB-MIB::test2.0
I know from snmptranslate that the mib file is set correctly. I have even looked through the debug for snmpget (using -DALL) to make sure that the mib is being loaded and parsed correctly.
So my question is: Why is my subagent not being passed the request?
Update:
I've been told by @EhevuTov that my MIB file is not valid; however, smilint does not report any issues, and running snmpget -v 2c -c mycommunity localhost:161 .1.3.6.1.4.1.42134.3.0 does report the NAME of the object (IDB-MIB::test2.0) correctly, but does not find any data for it.
I am getting IDB-MIB::test2 = No Such Object available on this agent at this OID, which makes me think that my agent is not registering properly, however it's not throwing any errors.
Update 2:
I've been fiddling around with the agent code a bit. Based on the CPAN documentation (http://metacpan.org/pod/NetSNMP::agent), it looks like the $agent->register function call is supposed to return 0 if successful. So I checked the return code and got this:
Agent registered. Result: NetSNMP::agent::netsnmp_handler_registration=SCALAR(0x201e688)
Printing it out using Data::Dumper results in:
$VAR1 = bless( do{\(my $o = 34434624)}, 'NetSNMP::agent::netsnmp_handler_registration' );
I vaguely understand what bless does, but even so, I have no idea what this result is supposed to mean. So I'm starting to think that the agent is wrong somehow. Does anyone know how to debug these agents? Is there somewhere I can look to see if it's getting loaded properly into the master snmpd?
And I've solved the problem. It wasn't with the MIB, it was with the agent (which I had THOUGHT was working fine the whole time so I never bothered to check it).
I'd been running the agent stand-alone, because it seemed like it was working fine (never threw any errors when registering the handler). Apparently though, it needs to be run directly by snmpd.
I moved it to a directory that snmpd can access (because also apparently snmpd can't run scripts from /root, even though it's running as root), and added these lines in snmpd.conf:
perl print "\nRunning agents now\n";
perl do "/usr/share/snmp/agent.pl" || print "Problem running agent script: $!\n";
perl print "Agents run\n";
Note that these two lines were already present:
disablePerl false
perlInitFile /usr/share/snmp/snmp_perl.pl
I can now run the snmpget command and get the expected response.
> snmpget -v 2c -c mycommunity localhost:161 .1.3.6.1.4.1.42134.3
IDB-MIB::test2 = INTEGER: 2 tests