5 rkhunter warnings came up, should I be worried? [closed] - centos
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 2 years ago.
I just found rkhunter and decided to run a scan on my CentOS dedicated server. No rootkits were found (thank goodness!), but there were warnings. Has anyone else run into these, and is this something I should worry about or investigate further?
Here are the warnings I received from rkhunter:
[22:01:58] /sbin/ifdown [ Warning ]
[22:01:58] Warning: The command '/sbin/ifdown' has been replaced by a script: /sbin/ifdown: Bourne-Again shell script text executable
[22:01:58] /sbin/ifup [ Warning ]
[22:01:58] Warning: The command '/sbin/ifup' has been replaced by a script: /sbin/ifup: Bourne-Again shell script text executable
[22:02:05] /usr/bin/GET [ Warning ]
[22:02:05] Warning: The command '/usr/bin/GET' has been replaced by a script: /usr/bin/GET: a /usr/bin/perl -w script text executable
[22:02:05] /usr/bin/ldd [ Warning ]
[22:02:05] Warning: The command '/usr/bin/ldd' has been replaced by a script: /usr/bin/ldd: Bourne-Again shell script text executable
[22:02:07] /usr/bin/whatis [ Warning ]
[22:02:07] Warning: The command '/usr/bin/whatis' has been replaced by a script: /usr/bin/whatis: POSIX shell script text executable
[22:03:03] Info: SCAN_MODE_DEV set to 'THOROUGH'
[22:03:05] Checking /dev for suspicious file types [ Warning ]
[22:03:05] Warning: Suspicious file types found in /dev:
[22:03:05] /dev/md/autorebuild.pid: ASCII text
[22:03:05] /dev/md/md-device-map: ASCII text
[22:03:05] /dev/.udev/queue.bin: Applesoft BASIC program data
[22:03:05] /dev/.udev/db/block:md0: ASCII text
[22:03:05] /dev/.udev/db/block:md1: ASCII text
[22:03:05] /dev/.udev/db/block:sda1: ASCII text
[22:03:05] /dev/.udev/db/net:eth1: ASCII text
[22:03:05] /dev/.udev/db/net:eth0: ASCII text
[22:03:05] /dev/.udev/db/block:sdb3: ASCII text
[22:03:05] /dev/.udev/db/block:sdb1: ASCII text
[22:03:05] /dev/.udev/db/block:sda3: ASCII text
[22:03:05] /dev/.udev/db/block:sda2: ASCII text
[22:03:05] /dev/.udev/db/block:sdb2: ASCII text
[22:03:05] /dev/.udev/db/input:event2: ASCII text
[22:03:05] /dev/.udev/db/input:event0: ASCII text
[22:03:05] /dev/.udev/db/block:sda: ASCII text
[22:03:05] /dev/.udev/db/block:sdb: ASCII text
[22:03:05] /dev/.udev/db/input:event4: ASCII text
[22:03:05] /dev/.udev/db/input:mouse1: ASCII text
[22:03:05] /dev/.udev/db/input:event3: ASCII text
[22:03:05] /dev/.udev/db/input:event1: ASCII text
[22:03:05] /dev/.udev/db/block:ram9: ASCII text
[22:03:05] /dev/.udev/db/block:ram8: ASCII text
[22:03:05] /dev/.udev/db/block:ram4: ASCII text
[22:03:05] /dev/.udev/db/block:ram5: ASCII text
[22:03:05] /dev/.udev/db/block:ram7: ASCII text
[22:03:05] /dev/.udev/db/block:ram6: ASCII text
[22:03:05] /dev/.udev/db/block:ram3: ASCII text
[22:03:06] /dev/.udev/db/block:ram2: ASCII text
[22:03:06] /dev/.udev/db/block:ram15: ASCII text
[22:03:06] /dev/.udev/db/block:ram14: ASCII text
[22:03:06] /dev/.udev/db/block:ram13: ASCII text
[22:03:06] /dev/.udev/db/block:ram12: ASCII text
[22:03:06] /dev/.udev/db/block:ram0: ASCII text
[22:03:06] /dev/.udev/db/block:ram1: ASCII text
[22:03:06] /dev/.udev/db/block:ram11: ASCII text
[22:03:06] /dev/.udev/db/block:ram10: ASCII text
[22:03:06] /dev/.udev/db/block:loop7: ASCII text
[22:03:06] /dev/.udev/db/block:loop3: ASCII text
[22:03:06] /dev/.udev/db/block:loop5: ASCII text
[22:03:06] /dev/.udev/db/block:loop4: ASCII text
[22:03:06] /dev/.udev/db/block:loop6: ASCII text
[22:03:06] /dev/.udev/db/block:loop1: ASCII text
[22:03:06] /dev/.udev/db/block:loop2: ASCII text
[22:03:06] /dev/.udev/db/block:loop0: ASCII text
[22:03:06] /dev/.udev/db/usb:2-1: ASCII text
[22:03:06] /dev/.udev/db/usb:1-1: ASCII text
[22:03:06] /dev/.udev/db/usb:3-7.1: ASCII text
[22:03:06] /dev/.udev/db/usb:3-7: ASCII text
[22:03:06] /dev/.udev/db/usb:usb1: ASCII text
[22:03:06] /dev/.udev/db/usb:usb3: ASCII text
[22:03:06] /dev/.udev/db/usb:usb4: ASCII text
[22:03:06] /dev/.udev/db/usb:usb2: ASCII text
[22:03:06] /dev/.udev/rules.d/99-root.rules: ASCII text
[22:03:06] Checking for hidden files and directories [ Warning ]
[22:03:06] Warning: Hidden directory found: /dev/.mdadm
[22:03:06] Warning: Hidden directory found: /dev/.udev
[22:03:06] Warning: Hidden file found: /usr/share/man/man5/.k5login.5.gz: gzip compressed data, from Unix, max compression
[22:03:06] Warning: Hidden file found: /usr/share/man/man5/.k5identity.5.gz: gzip compressed data, from Unix, max compression
[22:03:06] Warning: Hidden file found: /usr/share/man/man1/..1.gz: gzip compressed data, from Unix, max compression
[22:03:06] Warning: Hidden file found: /usr/bin/.fipscheck.hmac: ASCII text
[22:03:06] Warning: Hidden file found: /usr/bin/.ssh.hmac: ASCII text
[22:03:06] Warning: Hidden file found: /usr/sbin/.sshd.hmac: ASCII text
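On an RPM-based system like CentOS, a quick first check for the "replaced by a script" warnings is to verify each flagged file against the package that owns it. This is a minimal sketch, not a definitive audit; the paths are taken from the report above, and `verify_flagged` is just a hypothetical helper name:

```shell
#!/bin/sh
# Cross-check files that rkhunter flagged as "replaced by a script"
# against the RPM database.
verify_flagged() {
    for f in "$@"; do
        if [ ! -e "$f" ]; then
            echo "$f: not present on this system"
            continue
        fi
        pkg=$(rpm -qf "$f" 2>/dev/null)
        if [ -z "$pkg" ]; then
            echo "$f: not owned by any package -- investigate"
            continue
        fi
        # `rpm -V` is silent when every file in the package still
        # matches the checksums recorded at install time.
        if rpm -V "$pkg" >/dev/null 2>&1; then
            echo "$f ($pkg): unmodified"
        else
            echo "$f ($pkg): differs from package -- investigate"
        fi
    done
}

verify_flagged /sbin/ifdown /sbin/ifup /usr/bin/ldd /usr/bin/whatis
```

If every file comes back unmodified, the warnings are almost certainly legitimate distribution scripts rather than tampering.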
Running CentOS 7.3.1611 here, and rkhunter recently warned about some replaced commands too:
Warning: The command '/usr/sbin/ifdown' has been replaced by a script: /usr/sbin/ifdown: Bourne-Again shell script, ASCII text executable
Warning: The command '/usr/sbin/ifup' has been replaced by a script: /usr/sbin/ifup: Bourne-Again shell script, ASCII text executable
Warning: The command '/usr/bin/egrep' has been replaced by a script: /usr/bin/egrep: POSIX shell script, ASCII text executable
Warning: The command '/usr/bin/fgrep' has been replaced by a script: /usr/bin/fgrep: POSIX shell script, ASCII text executable
First, I found which packages those commands belong to:
# rpm -qf /usr/sbin/ifdown /usr/sbin/ifup /usr/bin/egrep /usr/bin/fgrep
initscripts-9.49.37-1.el7_3.1.x86_64
initscripts-9.49.37-1.el7_3.1.x86_64
grep-2.20-2.el7.x86_64
grep-2.20-2.el7.x86_64
Then, I verified those packages:
# rpm -V initscripts grep && echo OK
OK
Finally, I added these lines to /etc/rkhunter.conf.local to disable those warnings:
SCRIPTWHITELIST=/usr/sbin/ifdown
SCRIPTWHITELIST=/usr/sbin/ifup
SCRIPTWHITELIST=/usr/bin/fgrep
SCRIPTWHITELIST=/usr/bin/egrep
And checked again:
# rkhunter --check --rwo && echo OK
OK
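If the flagged scripts turn out to be legitimate package contents (as the `rpm -V` check above suggests), an alternative to whitelisting each path is to refresh rkhunter's stored file-properties baseline with `rkhunter --propupd`. A sketch, guarded so it degrades gracefully where rkhunter is absent; note that `--propupd` rewrites the baseline, so only run it on a system you currently trust:

```shell
#!/bin/sh
# Re-baseline rkhunter's file properties, then re-scan.
if command -v rkhunter >/dev/null 2>&1; then
    rkhunter --propupd                  # record current file properties as the new baseline
    rkhunter --check --rwo && echo OK   # --rwo: report warnings only
else
    echo "rkhunter not installed; nothing to re-baseline"
fi
```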
To check whether the files were infected, you can verify the packages that include those files. For example, on Debian/Ubuntu, for /usr/bin/ldd use:
apt install debsums apt-file
apt-file update
debsums $(apt-file search -F --package-only /usr/bin/ldd)
If you only see OK, the files match their packages and you are free to add them to the rkhunter ignore list. For example, add these lines to `/etc/rkhunter.conf.local` to disable those warnings:
SCRIPTWHITELIST=/usr/sbin/ifdown
SCRIPTWHITELIST=/usr/sbin/ifup
SCRIPTWHITELIST=/usr/bin/fgrep
SCRIPTWHITELIST=/usr/bin/egrep
The other warnings can also be disabled; see `/etc/rkhunter.conf` for the relevant options.
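For reference, the hidden-file and /dev warnings from the original question map onto their own config directives (`ALLOWHIDDENDIR`, `ALLOWHIDDENFILE`, `ALLOWDEVFILE`). A sketch for `/etc/rkhunter.conf.local`, assuming each listed path has been verified as benign first; the paths are the ones from the report above:

```
ALLOWHIDDENDIR=/dev/.udev
ALLOWHIDDENDIR=/dev/.mdadm
ALLOWHIDDENFILE=/usr/bin/.fipscheck.hmac
ALLOWHIDDENFILE=/usr/bin/.ssh.hmac
ALLOWHIDDENFILE=/usr/sbin/.sshd.hmac
ALLOWDEVFILE=/dev/md/md-device-map
```

Whitelist only what you have checked; blanket-allowing everything under /dev would defeat the purpose of the scan.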
ifdown, ldd, and so on are shell scripts, which the shell can execute directly. Run

file /sbin/ifdown

to see the details. There are also some hidden files (whose names start with a dot) and files under /dev that trigger warnings; that is normal.