I am trying to permanently mount a volume via sshfs on a Mac. I tried to follow the instructions in how-to-get-automount-and-sshfs-osxfuse-working-with-yosemite (although I have Sierra, I couldn't find instructions for it, so I thought I'd give the Yosemite instructions a try). However, I get stuck at this step:
If you do not see mount_sshfs, then you need to do this step. This
is a critical step because it is easily forgotten and may create
headaches. sudo ln -s $(which sshfs) /sbin/mount_sshfs.
Here is the error:
$ sudo ln -s $(which sshfs) /sbin/mount_sshfs
ln: /sbin/mount_sshfs: Operation not permitted
I couldn't find a way to solve this.
Apple protects some critical folders with System Integrity Protection (SIP). You can temporarily disable it using the instructions given here:
https://apple.stackexchange.com/questions/208478/how-do-i-disable-system-integrity-protection-sip-aka-rootless-on-macos-os-x
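For completeness, the SIP dance described there boils down to roughly this (csrutil must be run from the Recovery OS terminal, which you reach by rebooting while holding Cmd-R):
csrutil status                                  # check whether SIP is currently enabled
csrutil disable                                 # run in the Recovery OS terminal, then reboot
sudo ln -s $(which sshfs) /sbin/mount_sshfs     # back in normal macOS
csrutil enable                                  # re-enable SIP from Recovery afterwards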
I am using Neo4j on a remote server (Ubuntu 20.04) and would like to stream data from MongoDB to Neo4j. I followed the instructions here and tried both of the following approaches:
Use the following command:
sudo wget https://github.com/neo4j-contrib/neo4j-apoc-procedures/releases/tag/4.3.0.7/apoc-mongodb-dependencies-4.3.0.7.jar -O /mnt/neo4j/plugins/apoc-mongodb-dependencies-4.3.0.7.jar
Note that the plugins directory has a different path due to mounting, and I changed the path in the configuration file accordingly (see the sketch after the two approaches). This should not be causing any problems, because I had the same problem before mounting.
In a separate attempt, I also tried matching the release of the apoc-core file (4.4.0.3), with no better outcome.
Changing the ownership and read permissions as follows didn't help either:
sudo chown neo4j:neo4j apoc-mongodb-dependencies-4.4.0.3.jar
sudo chmod 755 apoc-mongodb-dependencies-4.4.0.3.jar
Use the following commands:
sudo wget https://repo1.maven.org/maven2/org/mongodb/mongo-java-driver/3.12.11/mongo-java-driver-3.12.11.jar -O /mnt/neo4j/plugins/mongo-java-driver-3.12.11.jar
sudo wget https://repo1.maven.org/maven2/org/mongodb/mongodb-driver/3.12.11/mongodb-driver-3.12.11.jar -O /mnt/neo4j/plugins/mongodb-driver-3.12.11.jar
sudo wget https://repo1.maven.org/maven2/org/mongodb/mongodb-driver-core/4.7.1/mongodb-driver-core-4.7.1.jar -O /mnt/neo4j/plugins/mongodb-driver-core-4.7.1.jar
sudo wget https://repo1.maven.org/maven2/org/mongodb/bson/4.7.1/bson-4.7.1.jar -O /mnt/neo4j/plugins/bson-4.7.1.jar
Note that I used the latest versions here; I also tried the versions given in the instructions, with no difference in the outcome.
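For reference, the configuration change I mention above looks roughly like this in my neo4j.conf (the plugin path reflects my mounted setup, and the two APOC procedure lines are my assumption of what needs to be allowed rather than verified settings):
dbms.directories.plugins=/mnt/neo4j/plugins
dbms.security.procedures.unrestricted=apoc.*
dbms.security.procedures.allowlist=apoc.*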
Now, when restarting neo4j.service, I can no longer access cypher-shell or the browser. In the first case I get "connection refused", while in the browser I get a blank page. When I check the status, the service is active and running, but I noticed that it is missing a line compared to when I don't have the dependencies:
Starting...
This instance is ServerId{#}
======== Neo4j 4.4.5 ======== (This line is missing with the dependencies downloaded!)
When I delete the dependencies from the plugins directory and restart, everything goes back to normal and functions as expected. One more thing to note is that apoc-core procedures work just fine!
I don't know if I'm doing something wrong here or if there is some sort of underlying problem!
So I want to install XMRig on my Raspberry Pi, and I happened to find the following article:
https://dev.to/ijason/cpu-mining-on-a-raspberry-pi-1e1d
I wanted to know whether anything in there is not as it seems. I have a pool ID and everything set up; I just don't know whether any of the packages could have damaging effects on my RPi. (The reason I am mining is for experimental purposes; I know I won't gain much.)
Submit the files to VirusTotal:
VirusTotal website
The site searches uploads from the cybersecurity community and checks whether any of the binaries or URLs have already been reported as malicious.
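If you would rather not upload the binary itself, you can also hash it locally and search VirusTotal for the hash (the path below is only a placeholder for wherever your xmrig binary ends up):
sha256sum path/to/xmrig     # paste the resulting hash into the VirusTotal search box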
Also, you can use ShiftLeftScan for Python code, GitHub repositories, etc.:
# Either download the scan binary directly...
wget https://github.com/ShiftLeftSecurity/sast-scan/releases/download/v1.9.27/scan
chmod +x scan
# ...or use the installer script:
sh <(curl https://slscan.sh)
# Docker-based scanning needs Docker installed and your user in the docker group:
sudo apt install docker.io
sudo systemctl enable --now docker
sudo usermod -aG docker USER    # replace USER with your user name, then log out and back in
# Then run the scanner against the current directory:
sudo docker run --rm -e "WORKSPACE=${PWD}" -v "$PWD:/app" shiftleft/sast-scan scan
https://github.com/ShiftLeftSecurity/sast-scan
I cannot execute any command using sudo. I get this error
-sh: sudo: command not found
First: if you are already root, you do not need sudo.
Second: if this is a Yocto-based image, as the question tag suggests, then there is no apt-get either. That is the "Debianoid" way of installing things and does not apply to prebuilt-image-based distributions such as those Yocto provides. So you have two options:
Change to Ubuntu or Debian (or any derivative thereof); then this approach will apply.
Use the Yocto/OpenEmbedded way of installing things (a rough sketch follows below). This is unfortunately not exactly trivial, so you'd better get started with the Yocto Project Quick Start.
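A rough sketch of that second option, assuming you build the image yourself: add sudo to the image in conf/local.conf and rebuild (the image name below is a placeholder; older Yocto releases use the _append suffix instead of the :append override syntax):
# conf/local.conf
IMAGE_INSTALL:append = " sudo"
# then rebuild and redeploy the image, e.g.:
bitbake core-image-minimal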
Maybe you need to check which user you are logged in as.
If you are the root user, you already have superuser rights.
If not, you need to change the configuration in your Yocto project like this:
EXTRA_USERS_PARAMS = "\
usermod -p 'password' root; \
"
I want to install PostgreSQL for a Node project that I'm developing on OS X Yosemite. I use MacPorts, and so I tried the method described here: https://github.com/codeforamerica/ohana-api/wiki/Installing-PostgreSQL-with-MacPorts-on-OS-X
...but I get an error during step 2:
$ sudo gem install pg -- --with-pg-config=/opt/local/lib/postgresql93/bin/pg_config > ruby_error
ERROR: Error installing pg:
ERROR: Failed to build gem native extension.
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/bin/ruby extconf.rb --with-pg-config=/opt/local/lib/postgresql93/bin/pg_config
Using config values from /opt/local/lib/postgresql93/bin/pg_config
checking for libpq-fe.h... yes
checking for libpq/libpq-fs.h... yes
checking for pg_config_manual.h... yes
checking for PQconnectdb() in -lpq... no
checking for PQconnectdb() in -llibpq... no
checking for PQconnectdb() in -lms/libpq... no
Can't find the PostgreSQL client library (libpq)
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of necessary
libraries and/or headers. Check the mkmf.log file for more details. You may
need configuration options.
...thinking that I may not need to install the pg gem since I want to work with Node and not Ruby, I moved on to the next steps. But there I ran into an error during step 3.3:
$ sudo su postgres -c '/opt/local/lib/postgresql93/bin/initdb -D /opt/local/var/db/postgresql93/defaultdb'
shell-init: error retrieving current directory: getcwd: cannot access parent directories: Permission denied
could not identify current directory: Permission denied
could not identify current directory: Permission denied
could not identify current directory: Permission denied
The program "postgres" is needed by initdb but was not found in the
same directory as "initdb".
Check your installation.
...checking my /opt/local/lib/postgresql93/bin/ directory, I see both initdb and postgres. I see those lines saying Permission denied and am wondering what that's about.
Not sure how to progress. I'm thinking of using Postgres.app if it really is easier, but I'm not sure whether it would be better to install via MacPorts, since I install most other things with MacPorts. Tips about any of my problems are appreciated!
The permissions/ownership on the directories between / and defaultdb likely need to be fixed. PostgreSQL can be sensitive to the ownership of these directories, though in your case it seems PostgreSQL simply doesn't have access to them. This is what I have for each directory:
$ ls -hlt /opt/local/var/db/
total 0
drwxr-xr-x 7 root admin 238B Jan 23 16:54 texmf
drwxr-xr-x 3 root admin 102B Dec 25 07:37 postgresql94
You could fix permissions by doing sudo chmod a+rx /opt/local/var/db/ as needed.
For the defaultdb directory itself, you should follow the instructions that you link to, which appear to match what I have:
sudo chown postgres:postgres /opt/local/var/db/postgresql93/defaultdb
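To see exactly where access breaks down, you can check every directory on the path in one command; each of these should be at least readable and executable (r-x) for the postgres user:
ls -ld /opt /opt/local /opt/local/var /opt/local/var/db \
       /opt/local/var/db/postgresql93 /opt/local/var/db/postgresql93/defaultdb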
Below are instructions adapted from my blog (though I recommend using PostgreSQL 9.4, which I now do). I've been running PostgreSQL using MacPorts since 9.1 without major problems.
1. Install PostgreSQL using MacPorts.
Of course, I assume you’ve got MacPorts up and running on your system.
sudo port install postgresql93 +perl +python27
sudo port install postgresql93-server
2. Set up PostgreSQL
I first need to initialize the database cluster and then get the server running. The following comes straight from the on-screen instructions supplied with the MacPorts port postgresql93-server.
sudo mkdir -p /opt/local/var/db/postgresql93/defaultdb
sudo chown postgres:postgres /opt/local/var/db/postgresql93/defaultdb
sudo su postgres -c '/opt/local/lib/postgresql93/bin/initdb -D /opt/local/var/db/postgresql93/defaultdb'
Note that MacPorts creates a launch daemon. To load it now and to make sure it launches on system start, do:
sudo defaults write /Library/LaunchDaemons/org.macports.postgresql93-server.plist Disabled -bool false
sudo launchctl load /Library/LaunchDaemons/org.macports.postgresql93-server.plist
I then use psql for some set-up to get my database going.
sudo su - postgres
/opt/local/lib/postgresql93/bin/psql -U postgres -d template1
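Inside psql, my set-up is along these lines (the role and database names here are just placeholders):
CREATE ROLE myuser WITH LOGIN PASSWORD 'secret';
CREATE DATABASE mydb OWNER myuser;
\q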
If you get to here, then you have PostgreSQL running on your system.
I had the same issue when attempting to run initdb, even when following the description by Ian Gow:
$ sudo su postgres -c '/opt/local/lib/postgresql94/bin/initdb -D /opt/local/var/db/postgresql94/defaultdb'
shell-init: error retrieving current directory: getcwd: cannot access parent directories: Permission denied
could not identify current directory: Permission denied
could not identify current directory: Permission denied
could not identify current directory: Permission denied
The program "postgres" is needed by initdb but was not found in the
same directory as "initdb".
Check your installation.
It turns out that the postgres user cannot do anything if you make it run a command from within your own home directory, because there postgres is not allowed to read its own location and hence cannot figure out any other paths either. So the simple solution is to run cd / before any command that must be run as postgres (initdb, pg_ctl, etc.). Afterwards, you can quickly jump back to your previous working directory using cd -.
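So, for the initdb example above, the working sequence is simply:
cd /
sudo su postgres -c '/opt/local/lib/postgresql94/bin/initdb -D /opt/local/var/db/postgresql94/defaultdb'
cd -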
I'm working on an instance in the us-central1-a zone and I can't copy a ~200 GB file.
I've tried:
gsutil -m cp -L my.log my.file gs://my-bucket/
gsutil -m cp -L my.second.log my.file gs://my-bucket2/
And after several "catch ups" I get the following error:
CommandException: Some temporary components were not uploaded successfully. Please retry this upload.
CommandException: X files/objects could not be transferred.
Any clues?
Thanks
This is a message you'll see if gsutil's parallel composite uploads feature fails to upload at least one of the pieces of the file.
A couple of questions...
Have you already tried performing this upload again, after you saw this message?
If this error persists, could you please provide the stack trace from gsutil -d cp...
If you're consistently seeing this error and need an immediate fix (e.g. if this is a bug with parallel uploads), you can set parallel_composite_upload_threshold=0 in the GSUtil section of your boto config to disable parallel composite uploads.
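For reference, that setting lives in the GSUtil section of the boto config file (typically ~/.boto, or whatever BOTO_CONFIG points at):
[GSUtil]
parallel_composite_upload_threshold = 0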
I had the same experience using gsutil. I fixed it by installing crcmod.
First, run the command you are having issues with using the debug flag, for example:
gsutil -d -m cp gs://<path_to_file_in_bucket> .
In the output I can see:
CommandException: Downloading this composite object requires integrity checking with CRC32c, but your crcmod installation isn't using the module's C extension, so the hash computation will likely throttle download performance. For help installing the extension, please see "gsutil help crcmod".
To download regardless of crcmod performance or to skip slow integrity checks, see the "check_hashes" option in your boto config file.
NOTE: It is strongly recommended that you not disable integrity checks. Doing so could allow data corruption to go undetected during uploading/downloading.
You can follow the instructions here from Google to install crcmod for your specific OS: https://cloud.google.com/storage/docs/gsutil/addlhelp/CRC32CandInstallingcrcmod
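On a Debian/Ubuntu system that usually boils down to something like the following (adjust the package manager and Python version for your OS, per the linked page):
sudo apt-get install gcc python3-dev python3-setuptools
sudo pip3 uninstall crcmod
sudo pip3 install --no-cache-dir -U crcmod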
I got the same error message. I tried logging in to gcloud again with
gcloud auth login
and then I could run the command successfully.