PowerShell query to extract a specific string from a text file and export it to Excel

I need to extract the string "VQ_UHC_GovtPrograms_Sales_LeadCapture_Mandarin_Callback#VQ-switch" from the log file below whenever the "is not found on Switch" error appears in the log.
=================================================
So the output should be:
QueueName Date
==================================================
2022-08-07 20:36:41.534 ERROR 11100 --- [com.test.PCT.invoker.default] split.logs.queue-stat : Stat request failed for Stat Server: statserver-test-rrr_1a StatName: 'AverAbandCallTime' QueueName: 'VQ_UHC_GovtPrograms_Sales_LeadCapture_Mandarin_Callback#VQ-switch' Reason: 'Queue 'VQ_UHC_GovtPrograms_Sales_LeadCapture_Mandarin_Callback' is not found on Switch 'VQ-switch' (Tenant 'Environment')'
==================================================
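A minimal PowerShell sketch under assumptions: the log path (.\app.log), the output path (.\queues.csv), and the exact timestamp/QueueName regex are placeholders to adjust to the real log format.

# Find every line reporting the "is not found on Switch" error,
# then pull out the leading timestamp and the quoted QueueName
$results = Select-String -Path .\app.log -Pattern "is not found on Switch" |
    ForEach-Object {
        if ($_.Line -match "^(?<date>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}).*QueueName: '(?<queue>[^']+)'") {
            [pscustomobject]@{
                QueueName = $Matches['queue']
                Date      = $Matches['date']
            }
        }
    }

# CSV opens directly in Excel; with the ImportExcel module installed,
# Export-Excel could write a real .xlsx instead
$results | Export-Csv -Path .\queues.csv -NoTypeInformation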

Related

ElastAlert2 No mapping found

I'm trying to set up ElastAlert for OpenSearch 2.8. I wrote this config:
# This is the folder that contains the rule yaml files
# Any .yaml file will be loaded as a rule
rules_folder: /etc/elastalert/rules
# How often ElastAlert will query Elasticsearch
# The unit can be anything from weeks to seconds
run_every:
  minutes: 1
# ElastAlert will buffer results from the most recent
# period of time, in case some log sources are not in real time
buffer_time:
  minutes: 15
# The Elasticsearch hostname for metadata writeback
# Note that every rule can have its own Elasticsearch host
es_host: localhost
# The Elasticsearch port
es_port: 9200
# The AWS region to use. Set this when using AWS-managed elasticsearch
#aws_region: us-east-1
# The AWS profile to use. Use this if you are using an aws-cli profile.
# See http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
# for details
#profile: test
# Optional URL prefix for Elasticsearch
#es_url_prefix: elasticsearch
# Connect with TLS to Elasticsearch
use_ssl: True
# GET request with body is the default option for Elasticsearch.
# If it fails for some reason, you can pass 'GET', 'POST' or 'source'.
# See http://elasticsearch-py.readthedocs.io/en/master/connection.html?highlight=send_get_body_as#transport
# for details
# es_send_get_body_as: GET
# Optional basic-auth username and password for Elasticsearch
es_username: admin
es_password: password
# Use SSL authentication with client certificates client_cert must be
# a pem file containing both cert and key for client
verify_certs: False
#ca_certs: /path/to/cacert.pem
#client_cert: /path/to/client_cert.pem
#client_key: /path/to/client_key.key
# The index on es_host which is used for metadata storage
# This can be an unmapped index, but it is recommended that you run
# elastalert-create-index to set a mapping
writeback_index: elastalert_status
writeback_alias: elastalert_alerts
# If an alert fails for some reason, ElastAlert will retry
# sending the alert until this time period has elapsed
alert_time_limit:
  days: 2
...
And this rule file:
# Alert when the rate of events exceeds a threshold

# (Optional)
# Elasticsearch host
es_host: localhost

# (Optional)
# Elasticsearch port
es_port: 9200

# (Optional) Connect with SSL to Elasticsearch
use_ssl: True
ssl_show_warn: False
verify_certs: False

# (Optional) basic-auth username and password for Elasticsearch
# es_username: admin
# es_password: ytnhfvgkby

# (Required)
# Rule name, must be unique
name: Loopdetect

# (Required)
# Type of alert.
# the frequency rule type alerts when num_events events occur within timeframe time
type: any

# (Required)
# Index to search, wildcard supported
index: syslog-20221104

# (Required, frequency specific)
# Alert when this many documents matching the query occur within a timeframe
num_events: 1

# (Required, frequency specific)
# num_events must occur within this amount of time to trigger an alert
timeframe:
  hours: 24

# (Required)
# A list of Elasticsearch filters used to find events
# These filters are joined with AND and nested in a filtered query
# For more info: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl.html
# filter:
# - term:
#     process.name: "JUSTME"
filter:
- query:
    query_string:
      query: "message: *loop*"

# (Required)
# The alert is used when a match is found
alert:
- "email"

# (required, email specific)
# a list of email addresses to send alerts to
email:
- "myemail"
But when I try to check this rule, I get an error:
elastalert-test-rule rules/loopdetect_alert.yaml
INFO:elastalert:Note: In debug mode, alerts will be logged to console but NOT actually sent.
To send them but remain verbose, use --verbose instead.
WARNING:elasticsearch:POST https://localhost:9200/syslog-20221104/_search?ignore_unavailable=true&size=1 [status:400 request:0.048s]
Error running your filter:
RequestError(400, 'search_phase_execution_exception', {'error': {'root_cause': [{'type': 'query_shard_exception', 'reason': 'No mapping found for [@timestamp] in order to sort on', 'index': 'syslog-20221104', 'index_uuid': 'BG6MQmmYRUyLBY3tEFykEQ'}], 'type': 'search_phase_execution_exception', 'reason': 'all shards failed', 'phase': 'query', 'grouped': True, 'failed_shards': [{'shard': 0, 'index': 'syslog-20221104', 'node': '5spTsU7-QienT8Jn064MMA', 'reason': {'type': 'query_shard_exception', 'reason': 'No mapping found for [@timestamp] in order to sort on', 'index': 'syslog-20221104', 'index_uuid': 'BG6MQmmYRUyLBY3tEFykEQ'}}]}, 'status': 400})
INFO:elastalert:Note: In debug mode, alerts will be logged to console but NOT actually sent.
To send them but remain verbose, use --verbose instead.
INFO:elastalert:1 rules loaded
INFO:apscheduler.scheduler:Adding job tentatively -- it will be properly scheduled when the scheduler starts
WARNING:elasticsearch:POST https://localhost:9200/syslog-20221104/_search?_source_includes=%40timestamp%2C%2A&ignore_unavailable=true&scroll=30s&size=10000 [status:400 request:0.039s]
ERROR:elastalert:Error running query: RequestError(400, 'search_phase_execution_exception', 'No mapping found for [@timestamp] in order to sort on')
{"writeback": {"elastalert_error": {"message": "Error running query: RequestError(400, 'search_phase_execution_exception', 'No mapping found for [#timestamp] in order to sort on')", "traceback": ["Traceback (most recent call last):", " File \"/usr/local/lib/python3.11/dist-packages/elastalert2-2.8.0-py3.11.egg/elastalert/elastalert.py\", line 370, in get_hits", " res = self.thread_data.current_es.search(", " ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", " File \"/usr/local/lib/python3.11/dist-packages/elasticsearch/client/utils.py\", line 152, in _wrapped", " return func(*args, params=params, headers=headers, **kwargs)", " ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", " File \"/usr/local/lib/python3.11/dist-packages/elasticsearch/client/__init__.py\", line 1658, in search", " return self.transport.perform_request(", " ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", " File \"/usr/local/lib/python3.11/dist-packages/elasticsearch/transport.py\", line 392, in perform_request", " raise e", " File \"/usr/local/lib/python3.11/dist-packages/elasticsearch/transport.py\", line 358, in perform_request", " status, headers_response, data = connection.perform_request(", " ^^^^^^^^^^^^^^^^^^^^^^^^^^^", " File \"/usr/local/lib/python3.11/dist-packages/elasticsearch/connection/http_requests.py\", line 199, in perform_request", " self._raise_error(response.status_code, raw_data)", " File \"/usr/local/lib/python3.11/dist-packages/elasticsearch/connection/base.py\", line 315, in _raise_error", " raise HTTP_EXCEPTIONS.get(status_code, TransportError)(", "elasticsearch.exceptions.RequestError: RequestError(400, 'search_phase_execution_exception', 'No mapping found for [#timestamp] in order to sort on')"], "data": {"rule": "Loopdetect", "query": {"query": {"bool": {"filter": {"bool": {"must": [{"range": {"#timestamp": {"gt": "2022-11-03T12:12:39.618168Z", "lte": "2022-11-03T12:27:39.618168Z"}}}, {"query_string": {"query": "message: *loop*"}}]}}}}, "sort": [{"#timestamp": {"order": "asc"}}]}}}}}
But if I try to get the data with curl, it works:
curl -X GET 'https://localhost:9200/syslog-20221104/_search?ignore_unavailable=true&size=1' -u 'admin:password' --insecure
{"took":4,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":10000,"relation":"gte"},"max_score":1.0,"hits":[{"_index":"syslog-20221104","_id":"_bSKQYQB_cpiH2g_hgvj","_score":1.0,"_source":{"host":"10.53.0.35","hostname":"10.53.0.35","message":"Port 2 link up, 100Mbps FULL duplex","source_ip":"91.195.230.4","source_type":"syslog","timestamp":"2022-11-04T07:28:27Z"}}]}}
Please help me understand what I'm doing wrong.
Thanks.
I added timestamp_field: timestamp, and everything works fine!
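For reference, that is a one-line addition to the rule file: the curl output above shows the documents carry a timestamp field, while ElastAlert sorts on @timestamp by default.

# loopdetect_alert.yaml: point ElastAlert at the index's actual time field
timestamp_field: timestamp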

Parameter name: pUnk ---> System.ArgumentNullException: Value cannot be null

When I run my PowerShell script I occasionally get this error, even though my parameter is not empty. After doing some research I can see that "pUnk" is a common error, but I am still not sure why my script throws it.
: Line,char: 108,40 - Details: Microsoft.ConfigurationManagement.ManagementProvider.SmsConnectionException: Value cannot be null. Parameter name: pUnk ---> System.ArgumentNullException: Value cannot be null. Parameter name: pUnk at System.Runtime.InteropServices.Marshal.GetObjectForIUnknown(IntPtr pUnk) at System.Management.ManagementObject.Put(PutOptions options) at Microsoft.ConfigurationManagement.ManagementProvider.WqlQueryEngine.WqlResultObject.Put(ReportProgress progressReport) --- End of inner exception stack trace --- at Microsoft.ConfigurationManagement.ManagementProvider.WqlQueryEngine.WqlResultObject.Put(ReportProgress progressReport) at Microsoft.ConfigurationManagement.PowerShell.Framework.CMPSCmdlet.PutObject(IResultObject resultObject, Boolean refresh) at Microsoft.ConfigurationManagement.PowerShell.Cmdlets.Collections.SetCollection.ProcessRecordEx() at Microsoft.ConfigurationManagement.PowerShell.Framework.CMPSCmdlet.ProcessRecord()
As I run the code against SCCM, it's always this line that throws the error message:
$CurrentCollectionName | Set-CMDeviceCollection -NewName $NewCollectionName
Any help would be appreciated
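Since the exception surfaces intermittently from the SMS provider's COM layer, one mitigation worth trying is to re-fetch the collection object just before renaming and retry on failure. A hedged sketch (the retry count and delay are arbitrary; $CurrentCollectionName and $NewCollectionName come from the script above):

$maxAttempts = 3
for ($attempt = 1; $attempt -le $maxAttempts; $attempt++) {
    try {
        # Re-fetch a fresh collection object on each attempt instead of
        # reusing a possibly stale one, then rename it
        Get-CMDeviceCollection -Name $CurrentCollectionName |
            Set-CMDeviceCollection -NewName $NewCollectionName
        break
    }
    catch {
        if ($attempt -eq $maxAttempts) { throw }
        Start-Sleep -Seconds 5   # give the provider connection time to recover
    }
}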

GFSH get and PowerShell

I'd like to run a get from a PowerShell script like this:
$GfshCommand = '"get --key=number --region=admin"'
$No = .\gfsh.bat -e "connect --locator=1.2.3.4[10334]" -e $GfshCommand
The output that I get from running this is
(1) Executing - connect --locator=1.2.3.4[10334]
Connecting to Locator at [host=1.2.3.4, port=10334] ..
Connecting to Manager at [host=ME, port=1099] ..
Successfully connected to: [host=ME, port=1099]
You are connected to a cluster of version: 1.14.2
(2) Executing - get --key=number --region=admin
Result : true
Key Class : java.lang.String
Key : number
Value Class : java.lang.String
Value : "1234"
So I am looking for the get result to be just 1234.
Is there already a way I can retrieve it via PowerShell, ideally without string manipulation?
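As far as I know, gfsh -e only emits this plain-text result table, so a little parsing of the captured lines is hard to avoid; a contained sketch (locator address, key, and region taken from the question):

$GfshCommand = '"get --key=number --region=admin"'
$out = .\gfsh.bat -e "connect --locator=1.2.3.4[10334]" -e $GfshCommand

# Grab the "Value : ..." row from the result table and strip the quotes
$row = $out | Select-String -Pattern '^Value\s+:\s+(.+)$' | Select-Object -First 1
$No = $row.Matches[0].Groups[1].Value.Trim('"')   # 1234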

How can I use '--extra-vars' for replicas in Ansible playbooks?

I am trying to set a default of 1 replica for a pod deployment, but I would also like the option to change the value with --extra-vars="pod_replicas=2". I have tried the following, but it doesn't work for me.
vars:
  - pod_replicas: 1
spec:
  replicas: "{{ pod_replicas }}"
ERROR:
TASK [Create a deployment]
fatal: [localhost]: FAILED! => {"changed": false, "error": 422, "msg": "Failed to patch object: b'{\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"m essage\":\" \\\\\"\\\\\" is invalid: patch: Invalid value: \\\\\"{\\\\\\\\\\\\\"apiVersion\\\\\\\\\\\\\":\\\\\\\\\\\\\"apps/v1\\\\\\\\\\\\\",\\\\\\\\\\\\\"kind\\\\\\\\\\\\\":\\\\\\\\\ \\\\"Deployment\\\\\\\\\\\\\",\\\\\\\\\\\\\"metadata\\\\\\\\\\\\\":{\\\\\\\\\\\\\"annotations\\\\\\\\\\\\\":{\\\\\\\\\\\\\"deployment.kubernetes.io/revision\\\\\\\\\\\\\":\\\\\\\\\\\\ \"1\\\\\\\\\\\\\"},\\\\\\\\\\\\\
(...)
\\"2022-02-14T12:13:38Z\\\\\\\\\\\\\",\\\\\\\\\\\\\"lastTransitionTime\\\\\\\\\\\\\":\\\\\\\\\\\\\"2022-02-14T12:13:33Z\\\\\\\\\\\\\",\\\\\\\\\\\\\"reason\\\\\\\\\\\\\":\\\\\\\\\\ \\\"NewReplicaSetAvailable\\\\\\\\\\\\\",\\\\\\\\\\\\\"message\\\\\\\\\\\\\":\\\\\\\\\\\\\"ReplicaSet \\\\\\\\\\\\\\\\\\\\\\\\\\\\\"ovms-deployment-57c9bbdfb8\\\\\\\\\\\\\\\\\\\\\\\\\ \\\\" has successfully progressed.\\\\\\\\\\\\\"},{\\\\\\\\\\\\\"type\\\\\\\\\\\\\":\\\\\\\\\\\\\"Available\\\\\\\\\\\\\",\\\\\\\\\\\\\"status\\\\\\\\\\\\\":\\\\\\\\\\\\\"True\\\\\\\\ \\\\\",\\\\\\\\\\\\\"lastUpdateTime\\\\\\\\\\\\\":\\\\\\\\\\\\\"2022-02-14T14:18:33Z\\\\\\\\\\\\\",\\\\\\\\\\\\\"lastTransitionTime\\\\\\\\\\\\\":\\\\\\\\\\\\\"2022-02-14T14:18:33Z\\\ \\\\\\\\\\",\\\\\\\\\\\\\"reason\\\\\\\\\\\\\":\\\\\\\\\\\\\"MinimumReplicasAvailable\\\\\\\\\\\\\",\\\\\\\\\\\\\"message\\\\\\\\\\\\\":\\\\\\\\\\\\\"Deployment has minimum availabili ty.\\\\\\\\\\\\\"}]}}\\\\\": v1.Deployment.Spec: v1.DeploymentSpec.Replicas: readUint32: unexpected character: \\\\ufffd, error found in #10 byte of ...|eplicas\\\\\":\\\\\"1\\\\\",\\ \\\"revisi|..., bigger context ...|\\\\\"spec\\\\\":{\\\\\"progressDeadlineSeconds\\\\\":600,\\\\\"replicas\\\\\":\\\\\"1\\\\\",\\\\\"revisionHistoryLimit\\\\\":10,\\\\\"selector\\\\\ ":{\\\\\"matchLab|...\",\"field\":\"patch\"}]},\"code\":422}\\n'", "reason": "Unprocessable Entity", "status": 422}
Any idea how I can fix this?? Thank you!
Regarding your question
How can I use --extra-vars in Ansible playbooks?
you may have a look into Understanding variable precedence, Using -e extra variables at the command line, and the following small test setup:
---
- hosts: localhost
  become: false
  gather_facts: false
  vars:
    REPLICAS: 1
  tasks:
    - name: Show value
      debug:
        msg: "{{ REPLICAS }} in {{ REPLICAS | type_debug }}"
which will for a run with
ansible-playbook vars.yml
result into an output of
TASK [Show value] ******
ok: [localhost] =>
msg: 1 in int
and for a run with
ansible-playbook --extra-vars="REPLICAS=2" vars.yml
into an output of
TASK [Show value] ******
ok: [localhost] =>
msg: 2 in unicode
Because of the error message
v1.Deployment.Spec: v1.DeploymentSpec.Replicas: readUint32: unexpected character: \\\\ufffd, error found in #10 byte of ...|eplicas\\\\\":\\\\\"1\\\\\"
I've introduced the type_debug filter. It may be necessary to cast the value to an integer.
- name: Show value
  debug:
    msg: "{{ REPLICAS }} in {{ REPLICAS | int | type_debug }}"
Further Occurrences
When I've been typing numeric values from a variable file, they've been resolved as strings, not numbers
I have found a solution. Using a JSON object as an argument seems to work:
ansible-playbook --extra-vars '{ "pod_replicas":2 }' <playbook>.yaml
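Alternatively, applying the int cast from the answer inside the template should let the plain --extra-vars="pod_replicas=2" form work as well (a sketch based on the spec fragment from the question):

spec:
  # the int filter ensures the Kubernetes API receives a number even when
  # --extra-vars passes pod_replicas as a string
  replicas: "{{ pod_replicas | int }}"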

QuotaExceededError with pouchdb

I have a big error with PouchDB communicating with my Cloudant database in an Angular/Ionic app.
Can you please help me figure out how to fix this?
POST https://louisromain.cloudant.com/boardline_users/_bulk_get?revs=true&attachments=true&_nonce=1446478625328 400 (Bad Request)
pouchdb.min.js:8 Database has a global failure DOMError {message: "", name: "QuotaExceededError"}
ionic.bundle.min.js:139 o {status: 500, name: "abort", message: "unknown", error: true, reason: "QuotaExceededError", result: {doc_write_failures: 1, docs_read: 1, docs_written: 0, errors: Array[1], ok: false, status: "aborting", start_time: Mon Nov 02 2015 16:36:59 GMT+0100 (CET), end_time: Mon Nov 02 2015 16:37:05 GMT+0100 (CET), last_seq: "3478-g1AAAAFJeJzLYWBgYMlgTmGQT0lKzi9KdUhJMtXLSs1LLUst0kvOyS9NScwr0ctLLckBKmRKZEiy____f1YGUxIDA3N6LlCMPdXM1MzEMo1oM5IcgGRSPcKYcLAxKZYGlslpSajGmOA2Jo8FSDI0ACmgSftRXJSSamFoYWmOapQ5IaMOQIwCuooZZFQhxHPmJkCURtigLAAxFGUZ"}}
11ionic.bundle.min.js:139 Error: Failed to execute 'transaction' on 'IDBDatabase': The database connection is closing.
at Error (native)
at a.9.n.openTransactionSafely (http://localhost:8101/lib/pouchdb/dist/pouchdb.min.js:8:9233)
at i.a.8.e._getLocal (http://localhost:8101/lib/pouchdb/dist/pouchdb.min.js:8:2521)
at i.<anonymous> (http://localhost:8101/lib/pouchdb/dist/pouchdb.min.js:7:6737)
at i.<anonymous> (http://localhost:8101/lib/pouchdb/dist/pouchdb.min.js:10:28092)
at i.a.90.t.exports (http://localhost:8101/lib/pouchdb/dist/pouchdb.min.js:10:28931)
at http://localhost:8101/lib/pouchdb/dist/pouchdb.min.js:9:28802
at i.<anonymous> (http://localhost:8101/lib/pouchdb/dist/pouchdb.min.js:9:28722)
at i.a.90.t.exports [as get] (http://localhost:8101/lib/pouchdb/dist/pouchdb.min.js:10:28931)
at i.angular.module.constant.service.$q.qify [as get] (http://localhost:8101/lib/angular-pouchdb/angular-pouchdb.js:35:27)(anonymous function) # ionic.bundle.min.js:139b.$get # ionic.bundle.min.js:111(anonymous function) # ionic.bundle.min.js:151a.$get.n.$eval # ionic.bundle.min.js:165a.$get.n.$digest # ionic.bundle.min.js:163(anonymous function) # ionic.bundle.min.js:166e # ionic.bundle.min.js:74(anonymous function) # ionic.bundle.min.js:76
The error is that the device has run out of space. Unfortunately this is an error thrown by IndexedDB itself when the device is too low on storage, so there's nothing you can do about it except to use less space. PouchDB's compact() can help; there's also the transform-pouch plugin if you want to just reduce the size of your documents.
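A sketch of the compaction route (assuming the local database is named boardline_users, as in the URL above; auto_compaction and compact() are standard PouchDB options):

// Open (or reopen) the local DB with auto-compaction enabled so old
// revisions are discarded as documents are updated
var db = new PouchDB('boardline_users', { auto_compaction: true });

// One-off compaction of the data already stored
db.compact().then(function (info) {
  console.log('compaction finished', info);
}).catch(function (err) {
  console.error('compaction failed', err);
});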