I have a text file containing hundreds of lines of database info.
I'm trying to extract the DatabaseIds for any database which is 35GB.
I would like to interrogate the file using PowerShell and produce a text file containing all the matching databases.
So in essence, I would like to scan through the file, find a DatabaseSize which is 35, and then extract the corresponding DatabaseId from 3 lines previous.
I've looked all over the net but can't seem to find anything which can do this.
Example extract of text file:
ServerId = VMlinux-b71655e1
DatabaseId = db-ecb2e784
Status = completed
LocationId = 344067960796
DatabaseSize = 30
ServerId = VMlinux-0db8d45b
DatabaseId = db-cea2f7a6
Status = completed
LocationId = 344067960796
DatabaseSize = 35
ServerId = VMlinux-c5388693
DatabaseId = db-9a421bf2
Status = completed
LocationId = 344067960796
DatabaseSize = 8
etc
Try something like this:
((GC myfile.txt |
Select-String 'DatabaseSize = 35' -Context 3).Context.PreContext)[0]
In case of multiple matches:
(GC myfile.txt |
Select-String 'DatabaseSize = 35' -Context 3) | % { ($_.Context.PreContext)[0] }
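To produce the text file the question asks for, a minimal sketch (results.txt is an assumed output name):
Get-Content myfile.txt |
    Select-String 'DatabaseSize = 35' -Context 3 |
    ForEach-Object {
        # PreContext[0] is 3 lines above the match, i.e. the DatabaseId line;
        # strip the prefix so only the id itself is written
        ($_.Context.PreContext)[0] -replace '^DatabaseId = '
    } |
    Out-File results.txt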
I have a set of files in a specific folder.
I want to read the date from the file name,
select the files for the last 2 dates, and move all other dates' files to another location using PowerShell.
Below is a sample of the file names I have:
Directory: E:\HOLDS\trim
Name
----
17988000412767900-20170402-T
17988000412770804-20170402-T
17988000412773204-20170402-T
17988000412792005-20170402-T
17988000412794300-20170402-T
17991325988242500-20170403-C
17991325988242800-20170403-C
17991325988243000-20170403-C
17991325988245000-20170403-C
17991325988245200-20170403-C
17992327574130910-20170404-T
17992327574131100-20170404-T
17992327574145005-20170404-T
17992327574145209-20170404-T
17992327574169106-20170404-T
17993057054385600-20170405-T
17993326857390200-20170405-R
17993327575638604-20170405-T
17993327575676304-20170405-T
17993327575835705-20170405-T
17993327575844703-20170405-T
17997018695202606-20170409-T
17998001450000100-20170409-C
17998001450001000-20170409-C
17998057920002100-20170409-R
17998119423714112-20170410-T
17998119423728401-20170410-T
17998282230003400-20170409-R
17998297810002500-20170409-R
17998327575543207-20170410-T
17998327575543708-20170410-T
17998327575546104-20170410-T
17998327575547600-20170410-T
17998327575591805-20170410-T
Is there any reason why you can't filter by the last write time instead?
$sourcedir = "c:\scripts\"
$files = Get-ChildItem $sourcedir | Sort-Object LastWriteTime
# last file
$files[-1].Name
# second-to-last file
$files[-2].Name
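Building on that, a minimal sketch that keeps the two newest files and moves the rest (the destination path is an assumption, and Select-Object -SkipLast needs PowerShell 5+):
# keep the 2 newest files, move everything else ($movedir is assumed)
$movedir = "c:\scripts\old\"
Get-ChildItem $sourcedir |
    Sort-Object LastWriteTime |
    Select-Object -SkipLast 2 |
    Move-Item -Destination $movedir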
From your requirement, if your file name always follows the same style, I think you can do this with this flow:
1. Define a dictionary/map whose key is the date and whose value is a list of file names.
2. Get all the files, then loop over them, split each file name on '-', take the second part as the date, and add the file into the dictionary/map.
3. Move out all files which do not satisfy the condition.
$maps = New-Object 'System.Collections.Generic.Dictionary[[string],[System.Collections.Generic.List[string]]]';
$files = ("17988000412767900-20170402-T",
"17988000412770804-20170402-T",
"17988000412773204-20170402-T",
"17988000412792005-20170402-T",
"17988000412794300-20170402-T",
"17991325988242500-20170403-C",
"17991325988242800-20170403-C",
"17991325988243000-20170403-C",
"17991325988245000-20170403-C",
"17991325988245200-20170403-C",
"17992327574130910-20170404-T",
"17992327574131100-20170404-T",
"17992327574145005-20170404-T",
"17992327574145209-20170404-T",
"17992327574169106-20170404-T",
"17993057054385600-20170405-T",
"17993326857390200-20170405-R",
"17993327575638604-20170405-T",
"17993327575676304-20170405-T",
"17993327575835705-20170405-T",
"17993327575844703-20170405-T",
"17997018695202606-20170409-T",
"17998001450000100-20170409-C",
"17998001450001000-20170409-C",
"17998057920002100-20170409-R",
"17998119423714112-20170410-T",
"17998119423728401-20170410-T",
"17998282230003400-20170409-R",
"17998297810002500-20170409-R",
"17998327575543207-20170410-T",
"17998327575543708-20170410-T",
"17998327575546104-20170410-T",
"17998327575547600-20170410-T",
"17998327575591805-20170410-T");
$files | %{
    $date = $_.Split('-')[1];
    if($maps.Keys.Contains($date)){
        $maps[$date].Add($_);
    }
    else{
        # create a fresh list per date; reusing one shared list would leave
        # every key pointing at the same list
        $maps.Add($date, (New-Object 'System.Collections.Generic.List[string]'));
        $maps[$date].Add($_);
    }
}
The output will be:
Key : 20170402
Value : {17988000412767900-20170402-T, 17988000412770804-20170402-T, 17988000412773204-20170402-T, 17988000412792005-20170402-T...}
Key : 20170403
Value : {17991325988242500-20170403-C, 17991325988242800-20170403-C, 17991325988243000-20170403-C, 17991325988245000-20170403-C...}
Key : 20170404
Value : {17992327574130910-20170404-T, 17992327574131100-20170404-T, 17992327574145005-20170404-T, 17992327574145209-20170404-T...}
Key : 20170405
Value : {17993057054385600-20170405-T, 17993326857390200-20170405-R, 17993327575638604-20170405-T, 17993327575676304-20170405-T...}
Key : 20170409
Value : {17997018695202606-20170409-T, 17998001450000100-20170409-C, 17998001450001000-20170409-C, 17998057920002100-20170409-R...}
Key : 20170410
Value : {17998119423714112-20170410-T, 17998119423728401-20170410-T, 17998327575543207-20170410-T, 17998327575543708-20170410-T...}
Then you can sort $maps by key and loop over the files in each value to move them out, as in the sketch below.
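A minimal sketch of that last step ($sourcedir and $movedir are assumed paths; the yyyyMMdd keys sort correctly as strings):
$sourcedir = "E:\HOLDS\trim"
$movedir   = "E:\HOLDS\old"    # assumed destination
$keepDates = $maps.Keys | Sort-Object | Select-Object -Last 2
foreach($date in @($maps.Keys | Where-Object { $keepDates -notcontains $_ })){
    foreach($name in $maps[$date]){
        Move-Item (Join-Path $sourcedir $name) -Destination $movedir
    }
}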
Try this:
$yourdirpath="C:\Temp\yourfolder"
$yourmovedir="C:\Temp\movedir"
$numrow=0
$currentdate=""
Get-ChildItem $yourdirpath |
select FullName, @{N="DateFile";E={$_.Name.Substring(18, 8)}} |
sort DateFile -Descending |
foreach{
if ($_.DateFile -ne $currentdate)
{
$numrow++
$currentdate=$_.DateFile
}
[pscustomobject]@{FullName=$_.FullName;DateFile=$_.DateFile;numrow=$numrow}
} | where numrow -gt 2 | foreach{Move-Item $_.FullName -Destination $yourmovedir}
MariaDB> select id,name from t where type='B' and name='Foo-Bar';
+----------------+---------+
| item_source_id | name |
+----------------+---------+
| 2000245 | Foo-Bar |
+----------------+---------+
1 row in set (0.00 sec)
index base_index { # Don't use this directly; it's for inheritance only.
blend_chars = +, &, U+23, U+22, U+27, -, /
blend_mode = trim_none, trim_head, trim_tail, trim_both
}
source b_source : base_source {
sql_query = select id,name from t where type='B'
sql_field_string = name
}
index b_index_lemma : base_index {
source = b_source
path = /path/b_index_lemma
morphology = lemmatize_en_all
}
SphinxQL> select * from b_index_lemma where match('Foo-Bar');
Empty set (0.00 sec)
Other Sphinx queries return results, so the problem isn't, e.g., that the index is empty. Yet the hyphenated form returns nothing, and I'd like it to match. Am I misusing blend_chars and blend_mode?
Here is the question. Sphinx version 2.1.6. I am trying to use an RT (real time) index, but when indexing, this message is displayed in the console:
using config file 'sphinx.conf'...
skipping non-plain index 'rt'...
But when I connect to searchd and run the query mysql> desc rt, it displays:
+------------+--------+
| Field | Type |
+------------+--------+
| id | bigint |
| id | field |
| first_name | field |
| last_name | field |
+------------+--------+
Is this default data? It does not match my configuration. How do I work with the rt index?
sphinx.conf:
source database
{
type = mysql
sql_host = 127.0.0.1
sql_user = test
sql_pass = test
sql_db = community
sql_port = 3306
mysql_connect_flags = 32 # enable compression
sql_query_pre = SET NAMES utf8
sql_query_pre = SET SESSION query_cache_type=OFF
}
source rt : database
{
sql_query_range = SELECT MIN(id),MAX(id) FROM mbt_accounts
sql_query = SELECT id AS 'accountId', first_name AS 'fname', last_name AS 'lname' FROM mbt_accounts WHERE id >= 0 AND id<= 1000
sql_range_step = 1000
sql_ranged_throttle = 1000 # milliseconds
}
index rt
{
source = rt
type = rt
path = /etc/sphinxsearch/rtindex
rt_mem_limit = 700M
rt_field = accountId
rt_field = fname
rt_field = lname
rt_attr_string = fname
rt_attr_string = lname
charset_type = utf-8
charset_table = 0..9, A..Z->a..z, _, -, a..z, U+410..U+42F->U+430..U+44F, U+430..U+44F, U+401->U+451, U+451
}
searchd
{
listen = localhost:9312 # port for API
listen = localhost:9306:mysql41 #port for a SphinxQL
log = /var/log/sphinxsearch/searchd.log
binlog_path = /var/log/sphinxsearch/
query_log = /var/log/sphinxsearch/query.log
query_log_format = sphinxql
pid_file = /var/run/sphinxsearch/searchd.pid
workers = threads
max_matches = 1000
read_timeout = 5
client_timeout = 300
max_children = 30
max_packet_size = 8M
binlog_flush = 2
binlog_max_log_size = 90M
thread_stack = 8M
expansion_limit = 500
rt_flush_period = 1800
collation_server = utf8_general_ci
compat_sphinxql_magics = 0
prefork_rotation_throttle = 100
}
Thanks.
indexer only works with indexes that have a 'source', i.e. plain disk indexes: indexer does the stuff in the source definition to get the data to create the index.
RT (Real Time) indexes work very differently. indexer is not involved with RT indexes at all; they are handled totally by searchd.
To add data to an RT index, you need to run SphinxQL commands (INSERT, UPDATE etc.) that actually add the data to the index, as in the example below.
(DESCRIBE works because searchd knows the 'structure' of the index - you told it via rt_field etc. - even if you never inserted any data.)
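For example, a minimal SphinxQL session against the rt index defined above (connect to the SphinxQL listener, localhost:9306 in your config; the values are made up for illustration):
mysql -h localhost -P 9306
INSERT INTO rt (id, accountId, fname, lname) VALUES (1, 'acc-1', 'John', 'Smith');
SELECT * FROM rt WHERE MATCH('Smith');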
Ah, I think you are asking why the structure is different. That's probably because the index was created before you modified sphinx.conf. If you change the definition of an RT index, you need to 'destroy' the index to allow it to be recreated.
The simplest way is to shut down searchd, delete the index files, delete the binlog (it is no longer relevant) and then restart searchd.
searchd --stopwait
rm /etc/sphinxsearch/rtindex*
rm /var/log/sphinxsearch/binlog* # binlog_path from your config
searchd # starts searchd again
I have a CSV that I read in. The problem is that this CSV has columns that are not filled with data. I need to go through a column and, when the ID number matches the ID of the content I want to add, add in the data. Currently I have:
$counter = 0
foreach($invoice in $allInvoices){
$knownName += $info | where-Object{$_.'DOCNUMBR'.trim().contains($cleanInvNum)}
$DocNumbr = $info | where-Object{$_.'DOCNUMBR'.trim() -eq $cleanInvNum}
$output = $ResultsTable.Rows.Add(" ", " ", " ", " ", $DocNumbr[$counter].'ORG', $DocNumbr[$counter].'CURNT', $knownName[$counter].'DOCNUMBR', " ")
$counter++
}
The problem with this code is that it just adds rows under the CSV and does not add the data to the existing row. How can I write a statement that finds the ID and adds the above content into that row?
I was able to resolve my issue by reworking my foreach loop and setting the values directly, like so:
$output.'CheckNumber' = $DocNumbr.'ORTRXAMT'
$output.'CheckDate' = $DocNumbr.'CURTRXAM'
$output.'Invoice Number' = $knownName.'DOCNUMBR'
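For context, a sketch of how the reworked loop might look as a whole. This is an assumption built from the snippets above: the Select() filter, the 'ID' column name, and updating a matched row instead of adding a new one are all illustrative, not the exact original code.
foreach($invoice in $allInvoices){
    $cleanInvNum = $invoice.Trim()
    $DocNumbr    = $info | Where-Object { $_.'DOCNUMBR'.Trim() -eq $cleanInvNum }
    $knownName   = $info | Where-Object { $_.'DOCNUMBR'.Trim().Contains($cleanInvNum) }
    # update the existing row whose ID matches instead of adding a new row
    $output = $ResultsTable.Select("ID = '$cleanInvNum'")[0]
    $output.'CheckNumber'    = $DocNumbr.'ORTRXAMT'
    $output.'CheckDate'      = $DocNumbr.'CURTRXAM'
    $output.'Invoice Number' = $knownName.'DOCNUMBR'
}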
I have a situation where I need to find an entity matching a given filename. The filename is in this form:
filename1 = "ABCD_126518.pdf";
filename2 = "XYZ_32162.pdf";
In the Oracle DB, I have entities with filename_patterns like the following:
ID | filename_pattern
1 | ABCD_
2 | KLM
3 | XYZ_
I need to find the pattern ID that the given filename matches. In the given example it should be ID = 1 for filename1 and ID = 3 for filename2. What should the query look like in Java for the named query?
I need something like
SELECT p FROM FilenamePattern p WHERE p.filename_pattern || "%" LIKE :param;
We use Oracle DB and JPA 1.0.
How about:
SELECT p FROM FilenamePattern p WHERE :param LIKE CONCAT(p.filename_pattern, '%')
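A minimal usage sketch in Java (the named-query name is an assumption; JPA 1.0 has no typed createNamedQuery, hence the cast):
// On the entity (query taken from the answer above; the name is illustrative)
@NamedQuery(
    name  = "FilenamePattern.findByFilename",
    query = "SELECT p FROM FilenamePattern p "
          + "WHERE :param LIKE CONCAT(p.filename_pattern, '%')")

// Caller side
@SuppressWarnings("unchecked")
List<FilenamePattern> matches = (List<FilenamePattern>) em
    .createNamedQuery("FilenamePattern.findByFilename")
    .setParameter("param", "ABCD_126518.pdf")
    .getResultList();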