How to replace xml string with special character in file - preg-replace

So I have an .xml file and I want to replace the first xml string, which says <some text="chageThis">, with <some text="NewText">, and I need to specify the path to my file in a .sh script. I tried this, from here:
sed 's/file="[^"]*"/file="\<some text="chageThis">/' /path/to/my.xml > /path/to/my.xml
but it does not work.
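One problem with the attempt above is that > /path/to/my.xml truncates the file before sed reads it, so the input is destroyed; the pattern file="[^"]*" also does not match the text shown in the question. A minimal sketch of what could work instead, assuming GNU sed (for in-place editing with -i) and that the attribute value appears exactly as written:
# edit the file in place, replacing the literal attribute value
sed -i 's/<some text="chageThis">/<some text="NewText">/' /path/to/my.xml
On systems without GNU sed, writing the output to a temporary file and moving it over the original achieves the same effect.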

Related

How to prevent sed from changing linebreaks?

I am changing the content of a .js file with a .sh file, but there's an unwanted side-effect.
.sh file:
# update file
version="0.1"
file="Test.js"
updated_line="const version = \"$version\";"
sed -i "1s/.*/$updated_line/" $file
Test.js file:
const version = "0";
const moreVars = {};
After running the .sh file the update seems to be successful, but my IDE is complaining that the linebreaks changed from CRLF to LF.
How can I make these changes while leaving the linebreaks the way they were?
sed is not linebreak-aware; in that sense, a CR character is just that, a CR character. If you want your line to end with a CR, you have to add the CR yourself. To sed it is just a character like any other.
updated_line="const version = \"$version\";"$'\r'
How can I make these changes while leaving the linebreaks the way they were?
Alternatively, write a regex that matches the CR character at the end of the line, if it exists, and preserves it. Note that a plain .* would swallow the CR as well, so match everything up to an optional trailing CR instead:
s/[^\x0d]*\(\x0d\?\)$/$updated_line\1/
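Putting it together, a minimal sketch of the update script with the CR-preserving substitution (this assumes GNU sed, which understands the \x0d escape):
# update file, keeping the original CRLF line endings intact
version="0.1"
file="Test.js"
updated_line="const version = \"$version\";"
# [^\x0d]* matches the line content without a trailing CR;
# the optional CR is captured and re-appended after the new text
sed -i "1s/[^\x0d]*\(\x0d\?\)\$/$updated_line\1/" "$file"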

COPY Postgres table with Delimiter as double byte

I want to copy a Postgres (version 11) table into a csv file with a double-byte character as the delimiter. Please advise whether this can be achieved.
I am trying this:
COPY "Tab1" TO 'C:\Folder\Tempfile.csv' with (delimiter E'অ');
Getting an error:
COPY delimiter must be a single one-byte character
You could use COPY TO PROGRAM. On a Unix system that could look like:
COPY "Tab1" TO PROGRAM 'sed -e ''s/|/অ/g'' > /outfile.csv' (FORMAT 'csv', delimiter '|');
Choose a delimiter that does not occur in the data. On Windows, perhaps you can write a PowerShell command that translates the characters.
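If running a program on the database server is not an option, the same translation can be done on the client side with psql. A minimal sketch, assuming the psql client is available, that mydb stands in for the actual database name, and that | never occurs in the data:
# export with a single-byte placeholder delimiter, then translate it to the desired character
psql -d mydb -c "\copy \"Tab1\" TO STDOUT WITH (FORMAT csv, DELIMITER '|')" | sed 's/|/অ/g' > Tempfile.csv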

Using Perl with Sed and Capture Groups to add another string to end of a string

I have a requirement to go through each file and add a new string at the end of a particular statement in the file. I already have the list of each such file (actually each file is SAS code) containing this statement. My aim is to edit each file in place after creating a backup first, so I have decided to use Perl to do this in-place editing on an AIX 7.1 machine.
The particular statement that I intend to add to in each file will always have three keywords identifying it: FILENAME, FTP and HOST, and it is always terminated by a semicolon. The statement can also occur multiple times in the same file.
Example of the statement in the file is:
FILENAME IN FTP "" LS HOST=XXXX USER=XXXX PASS=XXXX ;
The same type of statement can also span multiple lines, with some additional options on the statement.
FILENAME Test FTP "Sample.xls"
CD="ABCDEFG"
USER=XXXXX
PASS=XXXXX
HOST=XXXXX
BINARY
;
OR
filename Novell ftp &pitalist.
HOST=&HOST.
USER="XXXXXXXX"
PASS="XXXXXXX"
DEBUG
LRECL=10000;
My aim is to add a new string, %ftps_opts, at the end of the above statement, just before the ending semicolon. There should be at least one space or a newline between the existing statement and this new string, as shown below.
FILENAME IN FTP "" LS HOST=XXXX USER=XXXX PASS=XXXX %ftps_opts;
FILENAME Test FTP "Sample.xls"
CD="ABCDEFG"
USER=XXXXX
PASS=XXXXX
HOST=XXXXX
BINARY
%ftps_opts;
filename Novell ftp &pitalist.
HOST=&HOST.
USER="XXXXXXXX"
PASS="XXXXXXX"
DEBUG
LRECL=10000 %ftps_opts;
Is there a way to use a capture group and Perl to capture the existing statement in each file just before the semicolon and then append the new string at the end of it, with a space or newline? The Input.txt file has the list of files containing the FILENAME FTP statement, as shown above.
Something like this:
#!/bin/bash
input="~/Input.txt"
while IFS= read -r line
do
echo "$line"
perl -p -i.orig -e 's/(capture group)/\1 %ftps_opts /gi' "$line"
echo "done"
done < "$input"
Thank you.
You can tell Perl to process the whole file instead of processing it line by line:
perl -0777 -pe 's/(filename[^;]*ftp[^;]*host[^;]*)/$1 %ftps_opts/gi' -- file
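To apply that to every file listed in Input.txt while keeping a backup of each, the one-liner can be dropped into the loop from the question. A minimal sketch, assuming Input.txt holds one file path per line:
#!/bin/bash
input="$HOME/Input.txt"
while IFS= read -r file
do
    echo "updating $file"
    # -0777 slurps the whole file so multi-line statements are matched;
    # -i.orig edits in place and keeps a .orig backup of the original
    perl -0777 -p -i.orig -e 's/(filename[^;]*ftp[^;]*host[^;]*)/$1 %ftps_opts/gi' -- "$file"
done < "$input"
echo "done"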

Rename Multiple Text file using Batch file

How would I write a batch file to rename multiple text files?
Suppose we have to rename 200 files as below
ABC_Suman_156smnhk.txt,
ABC_Suman_73564jsdlfm.txt,
ABC_Suman_9864yds7mjf45mj.txt
To
MNC_Ranj_156smnhk.txt,
MNC_Ranj_73564jsdlfm.txt,
MNC_Ranj_9864yds7mjf45mj.txt
Note: I need only the ABC_Suman part changed to MNC_Ranj.
Any help would be appreciated.
To perform a batch rename, the basic command looks like this:
for filename in *foo*; do echo mv \"$filename\" \"${filename//foo/bar}\"; done > rename.txt
The command works as follows:
The for loop goes through all files in the current directory whose names contain foo.
For each filename, it constructs and echoes a command of the form mv "filename" "newfilename", where the file name and the new file name are surrounded by double quotes (to account for spaces in the file name) and the new file name has all instances of foo replaced with bar. The substitution ${filename//foo/bar} uses two slashes (//) to replace every occurrence of foo with bar.
Finally, the entire output is saved to rename.txt for user review to ensure that the rename commands are being generated correctly.
I took it from the following link:
http://www.peteryu.ca/tutorials/shellscripting/batch_rename
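Applied to the filenames in the question, that approach might look like the sketch below; the generated commands land in rename.txt so they can be reviewed before being executed:
for filename in ABC_Suman*.txt; do
  echo mv \"$filename\" \"${filename//ABC_Suman/MNC_Ranj}\"
done > rename.txt
# after checking rename.txt, run the generated mv commands
sh rename.txt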
@echo off
setlocal enableDelayedExpansion
for %%F in (ABC_Suman*.txt) do (
set "name=%%F"
ren "!name!" "!name:ABC_Suman=MNC_Ranj!"
)

matlab, textfile

I have a bunch of text files which have both strings and numbers in them, but the strings are only in the first few rows.
I'm trying to write a script which goes into my folder, searches all the files in the folder, deletes the text from each file, and writes the rest as it is to a new text file.
Does anybody know how?
I don't think this is a good use of MATLAB.
I think you'd be better off scripting this in Python or shell. Here is one way you could do it with tr in shell if you're on *nix or mac and if your files are all in the same directory and all have the file extension .txt:
#!/bin/sh
for i in *.txt
do
    # strip all alphabetic characters, leaving the numeric data
    tr -d "[:alpha:]" < "$i" > "$i.tr.txt"
done
To run, save the code above as a file, make it executable (chmod a+x filename), and run it in the directory containing your text files.
If the number of string lines is always the same, you can use textread() with the 'headerlines' option to skip over those string lines, then write the entire text buffer out.