Issues with Rsyslog and its Configuration File - redhat

I am having issues with the configuration file for rsyslog. Because I am not sure how the messages from the various devices will be sent, I match on either the hostname or the IP to sort each message into its appropriate file. The issue is that extra, false data gets added to that file. I have changed how the data comes in and changed the files, but the same pattern appears. Here is a section of the config:
If $hostname=="8.8.8.8" then /directory/cal-dns-01.log
If $hostname startswith_i 'Google_DNS' then /directory/cal-dns-01.log
If I were to use this command on the file:
awk -F" " {'a[$4]++ } END { for (b in a) { print b } }' /directory/cal-dns-01.log
I get this as the result: g, G, Go, Google_DNS, Google_DNS.Google.Company
I would like only the Google_DNS and Google_DNS.Google.Company entries to end up in the file.
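For reference, the same two matches can be written as a single RainerScript block. This is only a sketch, assuming rsyslog 7 or later; it does not change which messages are matched:
# combined filter, equivalent to the two lines quoted above
if ($hostname == '8.8.8.8') or ($hostname startswith_i 'Google_DNS') then {
    action(type="omfile" file="/directory/cal-dns-01.log")
    stop
}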

How to copy specific headers in Soong

How can I extract specific files in Soong to use as headers?
I was recently writing a blueprint (Android.bp) file for paho-mqtt-c. My consumers require MQTTClient.h which paho-mqtt-c stores in src/ - what I would consider a "private" location. Reading their CMakeLists.txt file, it actually installs this and some other headers to include/.
As far as I can tell, Soong doesn't have this concept of installing, so it seems like I could either export_include_dirs the src directory (which seems wrong) or use a cc_genrule to copy these headers elsewhere.
But that's where I hit another issue: I can't seem to figure out how to create a cc_genrule that takes n inputs and writes n outputs (n-to-n). i.e.
cc_genrule {
    name: "paho_public_headers",
    cmd: "cp $(in) $(out)",
    srcs: ["src/MQTTAsync.h", "src/MQTTClient.h", "src/MQTTClientPersistence.h", "src/MQTTLogLevels.h"],
    out: ["public/MQTTAsync.h", "public/MQTTClient.h", "public/MQTTClientPersistence.h", "public/MQTTLogLevels.h"],
}
results in the failed command cp <all-inputs> <all-outputs>, rather than what I wanted, which would be closer to iterating the command over each input/output pair.
My solution was simply to write four cc_genrules, but that doesn't seem great either.
Is there a better way? (ideally without writing a custom tool)
The solution was to use gensrcs with a shard_size of 1.
For example:
gensrcs {
    name: "paho_public_headers",
    cmd: "mkdir -p $(genDir) && cat $(in) > $(out)",
    srcs: [":paho_mqtt_c_header_files"],
    output_extension: "h",
    shard_size: 1,
    export_include_dirs: ["src"],
}
The export_include_dirs is important: with gensrcs there is little control over the output filename other than the extension, so it's easiest to keep the directory structure of the $(in) files on the $(out) files.
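Once the headers are generated this way, a consuming module can pull them in through the standard generated_headers / export_generated_headers properties. The sketch below assumes a hypothetical consumer module; its name and source file are made up:
cc_library {
    name: "libmqtt_consumer", // hypothetical consumer module
    srcs: ["client.cpp"],
    generated_headers: ["paho_public_headers"],
    export_generated_headers: ["paho_public_headers"],
}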

How to change the metadata of all existing objects of a specific type in Google Cloud Storage?

I have uploaded thousands of files to Google Cloud Storage, and I found out that all the files are missing their content-type, so my website cannot serve them correctly.
I wonder if I can set some kind of policy to change the content-type of all the files at once. For example, I have a bunch of .html files inside the bucket:
a/b/index.html
a/c/a.html
a/c/a/b.html
a/a.html
.
.
.
Is it possible to set the content-type of all the .html files, in their different locations, with one command?
You could do:
gsutil -m setmeta -h Content-Type:text/html gs://your-bucket/**.html
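To confirm the header was applied to a given object, gsutil can print its current metadata. For example, using one of the paths listed above:
gsutil stat gs://your-bucket/a/b/index.html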
There isn't a single command that edits the metadata of every object at once. However, there is a gsutil command to edit an object's metadata, which you can use in a bash script to loop through all the objects inside the bucket.
1.- Option (1) is to use the gsutil "setmeta" command in a bash script:
# List all the object names in the bucket and run the metadata edit command on each one.
# Replace [METADATA_KEY], [METADATA_VALUE] and [BUCKET_NAME] with your own values.
for OBJECT in $(gsutil ls "gs://[BUCKET_NAME]/**")
do
    gsutil setmeta -h "[METADATA_KEY]:[METADATA_VALUE]" "$OBJECT"
done
2.- Option (2) is to use the Google Cloud Storage C++ client library to achieve the same thing:
#include "google/cloud/storage/client.h"
#include <string>

namespace gcs = google::cloud::storage;
using ::google::cloud::StatusOr;

// List all the objects in the bucket and, inside the loop, edit each
// object's metadata.
[](gcs::Client client, std::string bucket_name, std::string key,
   std::string value) {
  for (auto&& object_metadata : client.ListObjects(bucket_name)) {
    if (!object_metadata) continue;  // skip objects that could not be listed
    std::string object_name = object_metadata->name();
    gcs::ObjectMetadata desired = *object_metadata;
    // for Content-Type specifically you could call desired.set_content_type(value) instead
    desired.mutable_metadata().emplace(key, value);
    StatusOr<gcs::ObjectMetadata> updated = client.UpdateObject(
        bucket_name, object_name, desired,
        gcs::Generation(object_metadata->generation()));
  }
};

How to parse Mitmdump/Mitmproxy content

I am using mitmdump -dd > outfile to capture content, which gives me the complete request and response (headers and body content) and strips the junk parts of the traffic, i.e. no certificates and no compressed data.
But this is making my file really large. How can I get only the request part of the traffic?
Any advice or a link on how this can be done?
Thanks
The way I handled a similar (not exactly the same) case was to use the -s flag of mitmdump, get flow.request.content in my script, and log it to a log file; that keeps the output neat and clean.
The code below might be of some help (save it as t.py and run mitmdump -s t.py):
from mitmproxy import http
import logging

# send request bodies to a compact log file instead of dumping full flows
# ("requests.log" is just an example filename)
logger = logging.getLogger("request_logger")
logger.addHandler(logging.FileHandler("requests.log"))
logger.setLevel(logging.INFO)

def response(flow: http.HTTPFlow) -> None:
    # example of touching the response; not needed for logging requests
    flow.response.headers["newheader"] = "foo"

def request(flow: http.HTTPFlow) -> None:
    request_content = flow.request.content
    # here you get the request content; log it and use it as needed
    logger.info(request_content)

unoconv fails to save in my specified directory

I am using unoconv to convert an ods spreadsheet to a csv file.
Here is the command:
unoconv -vvv --doctype=spreadsheet --format=csv --output= ~/Dropbox/mariners_site/textFiles/expenses.csv ~/Dropbox/Aldeburgh/expenses/expenses.ods
It saves the output file in the same directory as the source file, not in the specified directory. The error message is:
Output file: /home/richard/Dropbox/mariners_site/textFiles/expenses.csv
unoconv: UnoException during export phase:
Unable to store document to file:///home/richard/Dropbox/mariners_site/textFiles/expenses.csv (ErrCode 19468)
I'm sure that this worked initially, but it has since stopped.
I have checked for permissions and they are identical for both directories.
I translated ErrCode 19468 for you and it boils down to meaning ERRCODE_SFX_DOCUMENTREADONLY.
You can find more information about the specific meaning of LibreOffice ErrCode numbers from the unoconv documentation at: https://github.com/dagwieers/unoconv/blob/master/doc/errcode.adoc
The clue here is that you have a whitespace character between --output= and the filename (--output= ~/Dropbox/mariners_site/textFiles/expenses.csv). Because of that, unoconv gets an empty output value (which means the current directory) and is given two input files, and that explains why you get this specific error, in my opinion.
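For comparison, a corrected invocation keeps the path immediately after --output=. It is written here with $HOME rather than ~, since tilde expansion does not happen in the middle of a word:
unoconv -vvv --doctype=spreadsheet --format=csv --output="$HOME/Dropbox/mariners_site/textFiles/expenses.csv" ~/Dropbox/Aldeburgh/expenses/expenses.ods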

Where / how / in what context is ipython's configuration file executed?

Where is ipython's configuration file, which starts with c = get_config(), executed? I'm asking because I want to understand the order in which things are done in ipython, e.g. why certain commands will not work if included in c.InteractiveShellApp.exec_lines.
This is related to my other question, Log IPython output?, because I want access to a logger attribute, but I can't figure out how to access it in the configuration file, and by the time exec_lines are run, the logger has already started (it's too late).
EDIT: I've accepted a solution based on using a startup file in IPython 0.12+. Here is my implementation of that solution:
from time import strftime
import os.path

ip = get_ipython()
#ldir = ip.profile_dir.log_dir
ldir = os.getcwd()
fname = 'ipython_log_' + strftime('%Y-%m-%d') + ".py"
filename = os.path.join(ldir, fname)
notnew = os.path.exists(filename)
try:
    ip.magic('logstart -o %s append' % filename)
    if notnew:
        ip.logger.log_write(u"########################################################\n")
    else:
        ip.logger.log_write(u"#!/usr/bin/env python\n")
        ip.logger.log_write(u"# " + fname + "\n")
        ip.logger.log_write(u"# IPython automatic logging file\n")
        ip.logger.log_write(u"# " + '# Started Logging At: ' + strftime('%Y-%m-%d %H:%M:%S\n'))
        ip.logger.log_write(u"########################################################\n")
    print " Logging to " + filename
except RuntimeError:
    print " Already logging to " + ip.logger.logfname
There are only two subtle differences from the proposed solution linked:
1. it saves the log to the cwd instead of a log directory (though I like that more...)
2. ip.magic_logstart doesn't seem to exist; instead one should use ip.magic('logstart')
The config system sets up a special namespace containing the get_config() function, runs the config file, and collects the values to apply them to the objects as they're created. Referring to your previous question, it doesn't look like there's a configuration value for logging output. You may want to start logging yourself after config, when you can control it more precisely. See this example of starting logging automatically.
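For illustration, a minimal profile configuration file looks something like this. It is only a sketch: the file is executed in the special namespace described above, and the option values are examples rather than anything specific to logging:
# ipython_config.py in the active profile directory (IPython 0.12+)
c = get_config()
# exec_lines run in the user namespace after the shell (and its logger)
# has been constructed, which is why they are too late to change how logging starts
c.InteractiveShellApp.exec_lines = [
    "import os",
]
c.TerminalIPythonApp.display_banner = False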
Your other question mentions that you're limited to 0.10.2 on one system: that has a completely different config system that won't even look at the same file.