How do I know this is not a virus? - raspberry-pi

So I want to install XMRig on the RPi, and I happened to find the following article:
https://dev.to/ijason/cpu-mining-on-a-raspberry-pi-1e1d
I wanted to know whether anything in there is not as it should be. I do have a pool ID and everything; I just don't know whether any of the packages could do any damage to my RPi. (The reason I am mining is purely experimental; I know I won't gain much.)

Submit the files to VirusTotal:
VirusTotal website
The website searches the cybersecurity community's uploads and checks whether any of the binaries or URLs have already been reported as malicious.
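If you prefer the command line, the same check can be scripted against VirusTotal's public API; a minimal sketch, assuming the current v3 endpoint and that you have registered for a free API key (the file name and key are placeholders):
# compute the SHA-256 of the binary you downloaded
sha256sum xmrig
# look the hash up on VirusTotal without uploading anything
curl -s -H "x-apikey: YOUR_API_KEY" "https://www.virustotal.com/api/v3/files/PASTE_SHA256_HERE"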
Also, you can use ShiftLeftScan for Python code, GitHub code, etc.:
wget https://github.com/ShiftLeftSecurity/sast-scan/releases/download/v1.9.27/scan
chmod +x scan
sh <(curl https://slscan.sh)
sudo apt install docker.io
sudo systemctl enable --now docker
sudo usermod -aG docker USER
sudo docker run --rm -e "WORKSPACE=${PWD}" -v "$PWD:/app" shiftleft/sast-scan scan
https://github.com/ShiftLeftSecurity/sast-scan
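Independently of any scanner, it is also worth comparing the checksum of whatever you downloaded against the one published on the project's release page, assuming the project publishes one (the file names below are placeholders):
# hash the archive you actually downloaded
sha256sum xmrig-linux-arm64.tar.gz
# if you saved the release's SHA256SUMS file alongside it, sha256sum can compare for you
sha256sum -c SHA256SUMS 2>/dev/null | grep xmrig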

Related

List directory /var/lib/apt/lists/partial is missing. - Acquire (20: Not a directory)

When I executed sudo apt update I'm getting
Reading package lists... Done
E: List directory /var/lib/apt/lists/partial is missing. - Acquire (20: Not a directory)
Also, I was getting a status error which I solved using
sudo cp /var/lib/dpkg/status-old /var/lib/dpkg/status
I tried sudo mkdir /var/lib/apt/lists/partial as suggested in a few other threads:
mkdir: cannot create directory ‘/var/lib/apt/lists/partial’: Not a directory
I even tried sudo mkdir /var/lib/apt/lists/
Any other solution?
This answer may not be exactly on topic, but since I landed here, others may too.
If you're using Docker and you face the same issue, you can do the following.
USER root
# RUN commands
USER 1001
Reference: Link
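For context, a slightly fuller sketch of that pattern in a Dockerfile (the base image, the package and the 1001 UID are only illustrative):
FROM ubuntu:20.04
USER root
# package installation needs root
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
# drop back to the unprivileged user the image normally runs as
USER 1001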
You can try adding -u 0 to the command:
sudo docker exec -u 0 -it ContainerID /bin/bash
According to Docker, the -u flag defines which username or UID the container runs as; setting -u 0 means you run the command as root, so use it with caution! Reference here
The same happened to me. I followed this answer as a guide: The package lists or status file could not be parsed or opened
I assumed my lists were corrupted. In /var/lib/apt/ I saw a file (lists#) instead of a directory, so I deleted it (sudo rm lists) and re-created the path (sudo mkdir -p /var/lib/apt/lists/partial). Double-check that the path gets created.
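Condensed into commands, assuming the same situation (lists exists as a stray file instead of a directory):
ls -la /var/lib/apt/                       # confirm that 'lists' is a file, not a directory
sudo rm /var/lib/apt/lists                 # remove the stray file
sudo mkdir -p /var/lib/apt/lists/partial   # recreate the expected directory tree
sudo apt update                            # rebuild the package lists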
I ran into the same issue while trying to build a new container and experimenting with a Dockerfile for a while.
What finally saved me was simply deleting all the containers I had created during this process using docker rm.
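A hedged example of that cleanup; note that it removes every container on the host, not just the ones from your experiment:
docker ps -a                  # review what exists first
docker rm $(docker ps -aq)    # remove them all (add -f to also force-remove running ones)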
I had this same issue when trying to install Typora on Ubuntu 20.04.
I was running into the error whenever I ran the command below:
# add Typora's repository
sudo add-apt-repository 'deb https://typora.io/linux ./'
Here's how I solved it:
I disconnected and reconnected my network connection, and when I ran the command again, it worked fine.
I think it was an issue with my network connectivity.
That's all.
I hope this helps
I had a similar error when using the Bitnami Spark image, and the docker exec command with the -u argument didn't work for me. I found my answer in the image documentation here.
If you are using a Docker image, it might be a non-root container image. Read the image provider's documentation to see how you can run it as a root container image.
This is how it works: access the container's bash as root and install your apps.
Get the container ID by name:
sudo docker ps -aqf "name=es01"
Access bash as root:
sudo docker exec -u 0 -it 3d42134dfd59 bash
Example install:
apt-get update
apt-get install nano
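The two steps can also be combined into a single line (es01 is just the example container name used above; if several containers match, pick the ID by hand instead):
sudo docker exec -u 0 -it "$(sudo docker ps -aqf name=es01)" bash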
You first need superuser privileges: type sudo -i and then enter your password.

apt-get update stuck in my K8s, failing to install any tool to debug

I am new to the K8s world and I am using helm to install the stable/mysql chart, which I would then like to test.
I run the command below to spawn a new Ubuntu container as a MySQL client. However, apt-get update always gets stuck at "Waiting for headers".
kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il
root@ubuntu:/# apt-get update
0% [Waiting for headers] [Waiting for headers]
I think it is a network issue, but I am not able to install any debugging tools since apt-get doesn't work.
I tried a couple of things, like modifying /etc/resolv.conf, but it doesn't seem to help.
Can anyone shed some light on how to proceed?
Thanks!
Following this guide to set up a proxy will make it work: https://askubuntu.com/questions/109673/how-to-use-apt-get-via-http-proxy-like-this
Add the content below to the file /etc/apt/apt.conf:
Acquire::http::Proxy "http://proxy.server.port:8080";
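If you'd rather not edit a file inside the container, the same setting can be passed once on the command line, using the same placeholder proxy address as above:
apt-get -o Acquire::http::Proxy="http://proxy.server.port:8080" update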

Docker Lamp Centos7: '/bin/sh -c systemctl start httpd.service' returned a non-zero code: 1

I'm starting to work with Docker to automate environments, so I'm trying to build a simple LAMP stack; the Dockerfile is the following:
FROM centos:7
ENV container=docker
RUN yum -y swap -- remove systemd-container systemd-container-libs -- install systemd systemd-libs
RUN yum -y update; yum clean all; \
(cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
RUN yum -y update && yum clean all
RUN yum -y install firewalld httpd mariadb-server mariadb php php-mysql php-gd php-pear php-xml php-bcmath php-mbstring php-mcrypt php-php-gettext
#Enable services
RUN systemctl enable httpd.service
RUN systemctl enable mariadb.service
#start services
RUN systemctl start httpd.service
RUN systemctl start mariadb.service
#Open firewall ports
RUN firewall-cmd --permanent --add-service=http
RUN firewall-cmd --permanent --add-service=https
RUN firewall-cmd --reload
EXPOSE 80
CMD ["/usr/sbin/init"]
so when I build the image
docker build -t myimage .
Then the build fails with the following error:
The command '/bin/sh -c systemctl start httpd.service' returned a non-zero code: 1
When I enter interactive mode (skipping the commands after RUN systemctl start httpd.service and rebuilding the image):
docker run -t -i myimage /bin/bash
and then try to start the httpd service manually, I get the following error:
Failed to get D-Bus connection: No connection to service manager.
So, what am I doing wrong?
First of all, welcome to Docker! :-) Loads of Docker tutorials and docs are written around Ubuntu containers, but I like Centos too.
Ok, there are a couple of things to talk about here:
You're running up against a known issue with systemd-based Docker containers: they seem to need extra privileges to run, and even then lots of extra config is required to get them working. The Red Hat team are experimenting with some fixes (mentioned in the comments), but I'm not sure where that's at.
If you wish to try getting it working, these are the best instructions I've found, but I've played with this several times in the last couple of weeks and not got it working yet.
What people might say is "the real issue" here is that a Docker container should not be thought of as a "mini Virtual Machine". Docker is designed to run one "root" process per container, and the container system makes it easy to compose multiple containers together - they are small on disk, light on memory usage and easy to network together.
Here's a blog post from Docker which gives some background on this. There's also the "Docker Fundamentals" docs on Dockerizing applications and Working with containers.
So arguably the best way to proceed with the setup you're attempting to create here (though it might sound more complicated at the beginning) is to break your "stack" up into the services you need, and then use a tool like docker-compose (introduction, documentation) to create single-purpose Docker containers as required.
In your case above, you have two services, a web server and a database server. Therefore two Docker containers should work well, connected together by the database network connection. Here are some examples:
example with Symfony app, nginx and MariaDB
example with MariaDB + NodeJS
If you run one service per Docker container, you don't need to use systemd to manage them, as the Docker daemon manages each container sort of like it is a Unix process. When the process dies, the Docker container dies, and this is important because the Docker server monitors containers and can restart them automatically, or notify you.
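As a rough sketch of that split using plain docker run commands instead of docker-compose (the image tags, network name and password below are only illustrative; check each image's documentation for its actual environment variables):
# user-defined network so the containers can reach each other by name
docker network create lamp
# database container
docker run -d --name db --network lamp -e MYSQL_ROOT_PASSWORD=changeme mariadb:10
# web container (Apache + PHP), published on port 80
docker run -d --name web --network lamp -p 80:80 php:7.4-apache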
This looks like a perfect example of where my docker-systemctl-replacement would fit in. It can easily interpret "systemctl start httpd.service" without an active systemd around. I have done the same for some database services, but not specifically mariadb.service - maybe you could give it a try.

Cannot use commands 'postgres' or 'pg_ctl'

I am on Unix. I have got postgresql-9.3 installed.
When I want to start the server using pg_ctl or postgres, the terminal gives me:
The program 'postgres' is currently not installed. You can install it by typing:
sudo apt-get install postgres-xc
Can't I start the server without this postgres-xc?
This must be remnants of the postgres-xc package you had installed previously.
Since you just installed postgresql-9.3 and don't seem to have any databases in use, yet, I suggest to completely purge all postgres packages.
sudo apt-get purge postgresql-9.2
sudo apt-get purge postgresql-xc
...
Until there's nothing left:
dpkg -l | grep postgres
Then start from scratch. Your instance of pg_ctl seems to belong to the package postgres-xc. This should be gone after you've uninstalled the package. Find out with one of these commands:
dpkg -S pg_ctl
dlocate pg_ctl
apt-file search pg_ctl
pg_ctlcluster is provided by the package postgresql-common.
pg_ctl is provided by the package postgresql-9.3.
More about starting Postgres in the manual.
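On Debian/Ubuntu the per-cluster wrapper is usually the easiest way to start the newly installed server; a minimal example, assuming the default cluster name main:
sudo pg_ctlcluster 9.3 main start
pg_lsclusters    # lists the clusters and whether they are online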
It is possible you might be missing a few things.
Try:
sudo apt-get install postgresql-client and
sudo apt-get install postgresql postgresql-contrib
The message about installing xc is a dud; it's probably suggesting that based on what it found in the package repositories.
Here's a good reference to this problem and its solution:
https://dba.stackexchange.com/questions/72580/missing-the-pg-ctl-package-in-postgres-9-3-installation
For whatever reason, a normal install of Postgres does not place the postgres binary on the PATH.
Adding the right directory to the path solves the problem (temporarily).
PATH=/usr/lib/postgresql/9.3/bin:$PATH
To make it permanent on my Ubuntu machine, I added the line to /etc/environment; this makes it work for all users.
The correct way to set the PATH differs between systems; for more info see:
How to permanently set $PATH on Linux?
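A per-user alternative to /etc/environment, in case you only need it in your own shell (the 9.3 directory is the one from this question; adjust it to whatever is installed on your machine):
ls /usr/lib/postgresql/    # find the installed version(s)
echo 'export PATH=/usr/lib/postgresql/9.3/bin:$PATH' >> ~/.profile
. ~/.profile               # reload for the current shell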
You must install postgresql-client:
sudo apt-get install postgresql-client
Then try entering this command in the console:
sudo -u postgres psql

I have installed PostgreSQL via MacPorts, but cannot access it

As I said in the title, I've installed PostgreSQL using MacPorts, but cannot access it.
The installation process was
$ sudo port install postgresql83-server
$ sudo mkdir -p /opt/local/var/db/postgresql83/webcraft
$ sudo chown postgres:postgres /opt/local/var/db/postgresql83/webcraft
$ sudo su postgres -c '/opt/local/lib/postgresql83/bin/initdb -D /opt/local/var/db/postgresql83/webcraft'
$ sudo launchctl load -w /Library/LaunchDaemons/org.macports.postgresql83-server.plist
My PATH is
/opt/local/lib/postgresql83/bin:/opt/local/lib/mysql5/bin:/opt/local/bin:/opt/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin
I try to connect to the server using the psql client
$ psql
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
Here is some info
$ ps ax | grep postgres | grep -v grep
52 ?? Ss 0:00.00 /opt/local/bin/daemondo --label=postgresql83-server --start-cmd /opt/local/etc/LaunchDaemons/org.macports.postgresql83-server/postgresql83-server.wrapper start ; --stop-cmd /opt/local/etc/LaunchDaemons/org.macports.postgresql83-server/postgresql83-server.wrapper stop ; --restart-cmd /opt/local/etc/LaunchDaemons/org.macports.postgresql83-server/postgresql83-server.wrapper restart ; --pid=none
Did you try running:
which psql
I imagine psql is still referencing /usr/bin/psql, and the MacPorts version of psql is suffixed with the version number, in your case psql83. You can alias psql to psql83 as a simple workaround. Better would be to change the default:
sudo port select --set postgresql postgresql83
That will do the proper routing.
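You can then verify that the switch took effect (the exact output depends on your installation):
port select --list postgresql    # the active entry should now be postgresql83
which psql && psql --version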
There is a very easy solution to this, but it's not well documented in my opinion:
MacPorts encourages installing their *_select ports to manage potentially multiple versions of software (say you want Postgres93 and Postgres94 at the same time). It's a great feature, but it adds an extra step that is for some reason rarely mentioned in the docs:
$ sudo port install postgresql94-server
Many failed attempts at starting the server later...
$ sudo port install postgresql_select
$ sudo port select postgresql
Available versions for postgresql:
none (active)
postgresql94
Well that can't be good!
$ sudo port select postgresql postgresql94
$ sudo port load postgresql94-server
You're kidding me. Now it's running?
Simply installing Postgres doesn't fully set up the symlinks needed to make it easily runnable. Installing postgresql_select gives MacPorts the information it needs to do that via port select. Once you've selected the active version of your choice, starting the Postgres server via launchctl is as easy as port load postgresqlXX-server.
I know this is a very late answer and doesn't address your full question, but launchctl will show different results depending on whether or not you are the superuser.
Try doing:
sudo launchctl list | grep postgres
I had exactly the same problem on my MacBook Pro. I was able to resolve it after reading this blog post and all its comments:
http://benscheirman.com/2010/06/installing-postgresql-for-rails-on-mac-os-x
The problem is that Postgres is not really running. I realized this after I did a port scan of my own machine and saw that nothing was listening on port 5432.
I created a small script "start_pg_server.sh":
#!/bin/sh
sudo su postgres -c 'pg_ctl start -D /opt/local/var/db/postgresql83/defaultdb/'
After executing this script the server was running and I could connect with pgAdmin. I was also able to run my Ruby stuff with rake db:create and rake db:migrate.
After I restored using Time Machine I had the same problem.
The reason was that the permissions were mangled and postgres could not write the pid file.
Running this solved it for me:
sudo chown -R postgres:postgres /opt/local/var/db/postgresql91/
sudo port unload postgresql91-server
sudo port load postgresql91-server
Did you by any chance create your postgres user with a shell of /usr/bin/false? If so, the startup script won't work because it uses su which passes commands you send it through the shell.
If you did set it to /usr/bin/false, try changing it to /bin/bash and that might fix things.
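Since this is a MacPorts setup on macOS, the user's shell is stored in Directory Services rather than /etc/passwd; a hedged sketch, assuming the account is literally named postgres (check yours first):
dscl . -read /Users/postgres UserShell    # see what the shell currently is
sudo dscl . -create /Users/postgres UserShell /bin/bash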