grep command in Solaris x86 to capture a string with special characters

I want to capture a particular string, i.e.
UPDATE kplustp..Service SET Service_Name = "PositionService", ServiceType = 'Z', HostName = "abcd"
in a log file at the path /home/abc/xyz.log.
I am using the command below:
grep -i "UPDATE kplustp..Service SET Service_Name = "PositionService", ServiceType = 'Z'" /home/adc/xyz.log
But it is not working, maybe because of the special characters in the string.
How can I do this?

Don't have my Solaris box at hand, but try this: double-quote the whole pattern and escape the inner double quotes, so the single quotes pass through to grep literally:
grep -i "UPDATE kplustp..Service SET Service_Name = \"PositionService\", ServiceType = 'Z'" /home/adc/xyz.log
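If the two dots in kplustp..Service should also match literally rather than as regex wildcards, a fixed-string search avoids the issue, assuming your grep supports -F (the XPG4 grep on Solaris does):
/usr/xpg4/bin/grep -iF "UPDATE kplustp..Service SET Service_Name = \"PositionService\", ServiceType = 'Z'" /home/adc/xyz.log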

Redis Mass Insertion from Postgresql file

Hello, I am trying to migrate from MySQL to PostgreSQL.
I have an SQL file which queries some records, and I want to load the results into Redis with a mass insert.
In MySQL this worked with the sample command below:
sudo mysql -h $DB_HOST -u $DB_USERNAME -p$DB_PASSWORD $DB_DATABASE --skip-column-names --raw < test.sql | redis-cli --pipe
I worked out the test.sql file in PostgreSQL syntax:
SELECT
'*3\r\n' ||
'$' || length(redis_cmd::text) || '\r\n' || redis_cmd::text || '\r\n' ||
'$' || length(redis_key::text) || '\r\n' || redis_key::text || '\r\n' ||
'$' || length(sval::text) || '\r\n' || sval::text || '\r\n'
FROM (
SELECT
'SET' as redis_cmd,
'ddi_lemmas:' || id::text AS redis_key,
lemma AS sval
FROM ddi_lemmas
) AS t
and one line of its output looks like:
"*3\r\n$3\r\nSET\r\n$11\r\nddi_lemmas:1\r\n$22\r\nabil+abil+neg+aor+pagr\r\n"
But I couldn't find any example of piping directly from the command line like the MySQL command above.
The examples I found use two stages rather than a direct pipe (first write to a text file, then load it into Redis):
sudo PGPASSWORD=$PASSWORD psql -U $USERNAME -h $HOSTNAME -d $DATABASE -f test.sql > data.txt
The command above works, but the output includes column names, which I don't want.
I am trying to send the output of the PostgreSQL query directly to Redis.
Could you help me please?
Solution:
To insert RESP commands generated from an SQL file (with the help of @teppic):
echo -e "$(psql -U $USERNAME -h $HOSTNAME -d $DATABASE -AEt -f test.sql)" | redis-cli --pipe
From the psql man page, -t will "Turn off printing of column names and result row count footers, etc."
-A turns off alignment, and -q sets "quiet" mode.
It looks like you're outputting RESP commands, in which case you'll have to use the escaped string format to get the newline/carriage return pairs, e.g. E'*3\r\n' (note the E).
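To see the difference, compare a plain literal with an escape string (a quick check you can run against any database; in recent PostgreSQL versions standard_conforming_strings is on by default, so the plain literal keeps the backslashes as-is):
# plain literal: the backslashes come through literally as *3\r\n
psql -Atc "SELECT '*3\r\n'"
# escape string: \r and \n become a real carriage return and newline
psql -Atc "SELECT E'*3\r\n'"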
It might be simpler to pipe SET commands to redis-cli:
psql -At -c "SELECT 'SET ddi_lemmas:' || id :: TEXT || ' ' || lemma FROM ddi_lemmas" | redis-cli

Why does psql not recognise my single quotes?

$ psql -E --host=xxx --port=yyy --username=chi --dbname=C_DB -c 'DELETE FROM "Stock_Profile" WHERE "Symbol" = 'MSFT'; '
ERROR: column "msft" does not exist
LINE 1: DELETE FROM "Stock_Profile" WHERE "Symbol" = MSFT;
How do I show psql that MSFT is a string?
It does not like 'MSFT', \'MSFT\' or ''MSFT''
The problem is that you've run out of types of quote marks to nest. Breaking it apart, we have:
your shell needs to pass a single string to the psql command; this can be either single quotes or double quotes
your table name is mixed case so needs to be double quoted
your string needs to be single quoted
In the example you give:
psql -E --host=xxx --port=yyy --username=chi --dbname=C_DB -c 'DELETE FROM "Stock_Profile" WHERE "Symbol" = 'MSFT'; '
The shell sees two single-quoted strings, with MSFT left unquoted in between:
'DELETE FROM "Stock_Profile" WHERE "Symbol" = '
'; '
So the problem is not in psql, but in the shell itself.
Depending on what shell you are using, single-quoted strings probably don't accept any escapes (so \' doesn't help) but double-quoted strings probably do. You could therefore try using double-quotes on the outer query, and escaping them around the table name:
psql -E --host=xxx --port=yyy --username=chi --dbname=C_DB -c "DELETE FROM \"Stock_Profile\" WHERE \"Symbol\" = 'MSFT'; "
Now the \" won't end the string, so the shell will see this as a single string:
"DELETE FROM \"Stock_Profile\" WHERE \"Symbol\" = 'MSFT'; "
and pass it into psql with the escapes processed, resulting in the desired SQL:
DELETE FROM "Stock_Profile" WHERE "Symbol" = 'MSFT';
It's because the single quote before MSFT terminates the string as far as the shell is concerned, so psql never sees MSFT as a quoted string.
As @imsop points out, case sensitivity is not preserved if you drop the double quotes around table and column names, so escape the double quotes with a backslash (\) where they are required:
psql -E --host=xxx --port=yyy --username=chi --dbname=C_DB -c "DELETE FROM \"Stock_Profile\" WHERE \"Symbol\" = 'MSFT';"
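Another option, if you prefer to keep the outer single quotes for the shell, is PostgreSQL's dollar-quoting for the string literal, so no inner single quotes are needed at all (a sketch with the same placeholder connection options):
psql -E --host=xxx --port=yyy --username=chi --dbname=C_DB -c 'DELETE FROM "Stock_Profile" WHERE "Symbol" = $$MSFT$$;'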

How to add hash character search to sphinx

I have done the following updates to sphinx to include a hash character in my search to no avail.
config file:
source MY_SOURCE{
...
sql_query_pre = SET CHARACTER_SET_RESULTS=utf8
sql_query_pre = SET NAMES utf8
}
index MY_INDEX {
path = C:\Sphinx\data\MY_INDEX
...
charset_type = utf-8
charset_table = 0..9, A..Z->a..z, a..z, +, #, U+002E
}
I then run indexer --rotate --all. Please note that Sphinx is running as a Windows service.
When I run the following query, I get no results:
SELECT count(*) FROM MY_INDEX WHERE Match("#");
Can someone please look at this info and let me know what I am doing incorrectly?
Thank you!
A single # in the config file signifies a comment, and so is not seen; you need to encode it as U+0023.
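In other words, the charset_table line from the question would become something like this (and the index then needs to be rebuilt with indexer --rotate --all, as you are already doing):
charset_table = 0..9, A..Z->a..z, a..z, +, U+0023, U+002E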

skipping non-plain index rt (sphinx 2.1.6)

Here is the question. Sphinx, version 2.1.6. I am trying to use an rt (real time) index, but when indexing, this message is displayed in the console:
using config file 'sphinx.conf'...
skipping non-plain index 'rt'...
But when I connect to Sphinx and run the query mysql> desc rt, it displays:
+------------+--------+
| Field | Type |
+------------+--------+
| id | bigint |
| id | field |
| first_name | field |
| last_name | field |
+------------+--------+
Is this default data? It does not match my configuration. How do I work with the rt index?
sphinx.conf:
source database
{
type = mysql
sql_host = 127.0.0.1
sql_user = test
sql_pass = test
sql_db = community
sql_port = 3306
mysql_connect_flags = 32 # enable compression
sql_query_pre = SET NAMES utf8
sql_query_pre = SET SESSION query_cache_type=OFF
}
source rt : database
{
sql_query_range = SELECT MIN(id),MAX(id) FROM mbt_accounts
sql_query = SELECT id AS 'accountId', first_name AS 'fname', last_name AS 'lname' FROM mbt_accounts WHERE id >= 0 AND id<= 1000
sql_range_step = 1000
sql_ranged_throttle = 1000 # milliseconds
}
index rt
{
source = rt
type = rt
path = /etc/sphinxsearch/rtindex
rt_mem_limit = 700M
rt_field = accountId
rt_field = fname
rt_field = lname
rt_attr_string = fname
rt_attr_string = lname
charset_type = utf-8
charset_table = 0..9, A..Z->a..z, _, -, a..z, U+410..U+42F->U+430..U+44F, U+430..U+44F, U+401->U+451, U+451
}
searchd
{
listen = localhost:9312 # port for API
listen = localhost:9306:mysql41 #port for a SphinxQL
log = /var/log/sphinxsearch/searchd.log
binlog_path = /var/log/sphinxsearch/
query_log = /var/log/sphinxsearch/query.log
query_log_format = sphinxql
pid_file = /var/run/sphinxsearch/searchd.pid
workers = threads
max_matches = 1000
read_timeout = 5
client_timeout = 300
max_children = 30
max_packet_size = 8M
binlog_flush = 2
binlog_max_log_size = 90M
thread_stack = 8M
expansion_limit = 500
rt_flush_period = 1800
collation_server = utf8_general_ci
compat_sphinxql_magics = 0
prefork_rotation_throttle = 100
}
Thanks.
indexer only works with indexes that have a 'source', i.e. plain disk indexes: indexer runs the queries in the source to get the data to create the index.
RT (Real Time) indexes work very differently. indexer is not involved with RT indexes at all. They are handled totally by searchd.
To add data to an RT index, you need to run SphinxQL commands (INSERT, UPDATE etc.) that actually add the data to the index.
(DESCRIBE works because searchd knows the 'structure' of the index (you told it via rt_field etc.), even if no data has ever been inserted.)
Ah, I think you are asking why the structure is different. That's probably because the index was created before you modified sphinx.conf. If you change the definition of an RT index, you need to 'destroy' the index to allow it to be recreated.
The simplest way is to shut down searchd, delete the index files, delete the binlog (it is no longer relevant) and then restart searchd.
searchd --stopwait
rm /etc/sphinxsearch/rtindex*
rm /path/to/binlog* # (your config above sets binlog_path = /var/log/sphinxsearch/)
searchd # (starts searchd again)
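Once searchd is running again and has recreated the index with the structure from sphinx.conf, you add rows through the SphinxQL port defined above, e.g. with the regular MySQL client (a sketch; the id and field values are made up):
mysql -h 127.0.0.1 -P 9306
mysql> INSERT INTO rt (id, fname, lname) VALUES (1, 'John', 'Smith');
mysql> SELECT * FROM rt WHERE MATCH('john');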

PostgreSQL, Perl and Dojo special character issue (æ, ø and å)

I have a webpage made in Perl and Dojo using a PostgreSQL database. I have to search for available people in the database, and since I'm from Denmark the letters æ, ø and å have to work in the search. I thought this was standard when using UTF-8, and since I normally program in PHP over MySQL I didn't think it would be that hard.
I have tried every trick I know to convert the search_word to the right encoding so I can search the PostgreSQL database for names with æ, ø and å... but it still fails.
My Perl code makes the fetch, but the fetch returns 0 rows, while the same statement pasted into the psql terminal returns 46 rows (I copy the STDERR statement from a "tail -f" of the log and insert it into another terminal connected to the database through the psql command)... The Perl code is:
sub dbSearchPersons {
my $search_word = escapeSql($_[0]);
$search_word = Encode::decode_utf8($search_word);
$statement = "SELECT id,name,initials,email FROM person WHERE name ilike '\%".$search_word."\%' OR email ilike '\%".$search_word."\%' OR initials ilike '\%".$search_word."\%' ORDER BY name ASC";
$sth = $dbh->prepare($statement);
$num_rows = $sth->execute();
print STDERR "Statement: " . $statement;
if($num_rows > 0){
$persons = $dbh->selectall_hashref($statement,'id');
}
dbFinish($sth);
webdie($DBI::errstr) if($DBI::errstr);
}
and as you can see, I write the SQL statement to STDERR, which outputs the following:
[Fri Apr 27 11:24:26 2012] [error] [client 10.254.0.1] Statement: SELECT id,name,initials,email FROM person WHERE name ilike '%Jørgen%' OR email ilike '%Jørgen%' OR initials ilike '%Jørgen%' ORDER BY name ASC, referer: https://xx.xxx.xxx.xx/cgi-bin/users.cgi
The SQL is correctly written (as I can see from the terminal output above), and if I copy the statement from the terminal and paste it directly into the psql terminal, I get 46 rows returned as I should... But the Perl still won't return any rows.
I don't get it. When I format a string so it displays "ø" and not "Ã¸" (which is what Perl turns the UTF-8 encoding into, from the "J%C3%B8rgen" that gets sent through dojo.xhr.post), shouldn't I be able to use it in a SQL statement? Is it because the PostgreSQL database can have a certain encoding that I have to take into account somehow? Or could it be something completely different?
I hope someone can help me. I have been struggling with this problem for two days now, and since everything looks like it should work but doesn't, I'm getting a little sad :/
Regards,
Thor Astrup Pedersen
You probably forgot to enable pg_enable_utf8. The database interface will then return Perl character data to you.
$ createdb -e -E UTF-8 -l en_US.UTF-8 -T template0 so10349280
CREATE DATABASE so10349280 ENCODING 'UTF-8' TEMPLATE template0 LC_COLLATE 'en_US.UTF-8' LC_CTYPE 'en_US.UTF-8';
$ echo 'create table person (id int, name varchar, initials varchar, email varchar)'|psql so10349280
CREATE TABLE
$ echo "insert into person (id, name) values (1, 'Jørgensen')"|psql so10349280
INSERT 0 1
$ echo 'select * from person'|psql so10349280
id | name | initials | email
----+-----------+----------+-------
1 | Jørgensen | |
$ perl -Mutf8 -Mstrictures -MDBI -MDevel::Peek -E'
my $dbh = DBI->connect(
"DBI:Pg:dbname=so10349280", $ENV{LOGNAME}, "", { RaiseError => 1, AutoCommit => 1, pg_enable_utf8 => 1}
);
my $r = $dbh->selectall_hashref("select * from person where name = ?", "id", undef, "Jørgensen");
Dump $r->{1}{name};
'
SV = PV(0x836e20) at 0xa58dc8
REFCNT = 1
FLAGS = (POK,pPOK,UTF8)
PV = 0xa5a000 "J\303\270rgensen"\0 [UTF8 "J\x{f8}rgensen"]
CUR = 10
LEN = 16
You don't say so quite clearly, but I think you eventually intend to send the character data out as JSON for use with Dojo. You need to encode it into UTF-8 octets; the various JSON libraries take care of that automatically for you, so there is no need to invoke Encode functions manually.
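Applied to the query from the question, the connect and fetch might look roughly like this (only a sketch: the database name, user and password are placeholders, and bind placeholders replace the interpolated search word, so escapeSql is no longer needed):
use DBI;
my $search_word = "Jørgen";   # already a decoded Perl character string
# connect with pg_enable_utf8 so DBD::Pg returns/accepts Perl character strings
my $dbh = DBI->connect(
    "DBI:Pg:dbname=yourdb", "youruser", "yourpass",
    { RaiseError => 1, AutoCommit => 1, pg_enable_utf8 => 1 }
);
# bind the search word instead of interpolating it into the SQL
my $persons = $dbh->selectall_hashref(
    "SELECT id, name, initials, email FROM person
     WHERE name ILIKE ? OR email ILIKE ? OR initials ILIKE ?
     ORDER BY name ASC",
    'id', undef, ("%$search_word%") x 3
);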