fuse: warning: library too old, some operations may not not work - CentOS

I'm trying to mount my S3 bucket on my server (CentOS 6), and when I run the following command
s3fs -o use_cache=/tmp/cache localdir bucket-name
I get this error:
fuse: warning: library too old, some operations may not not work

What version of FUSE are you using?
Try installing 2.8.4 manually; see the instructions here:
https://kisdigital.wordpress.com/2011/08/04/installing-s3fs-on-rhelcentos/
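If the distro packages are too old, the linked guide builds FUSE 2.8.4 from source. Roughly (a sketch following that guide's general approach, with the usual SourceForge download path; these commands are not quoted from the post):
# Remove the stock packages so the new build is the one that gets found
yum remove fuse fuse-devel
wget https://downloads.sourceforge.net/project/fuse/fuse-2.X/2.8.4/fuse-2.8.4.tar.gz
tar xzf fuse-2.8.4.tar.gz && cd fuse-2.8.4
./configure --prefix=/usr
make && make install
# Refresh the linker cache and confirm the version s3fs will see
ldconfig
pkg-config --modversion fuse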

Related

Mongo Procedures Dependencies Cause Neo4j Connection Issues

I am using Neo4j on a remote server (Ubuntu 20.04) and would like to stream data from MongoDB to Neo4j. I followed the instructions here and tried both of the following approaches:
Use the following command:
sudo wget https://github.com/neo4j-contrib/neo4j-apoc-procedures/releases/tag/4.3.0.7/apoc-mongodb-dependencies-4.3.0.7.jar -O /mnt/neo4j/plugins/apoc-mongodb-dependencies-4.3.0.7.jar
Note that the plugins directory has a different path due to mounting. I changed the path in the configuration file accordingly. This should not be causing any problems because I had the same problem before mounting.
In a separate attempt, I also tried matching the release of the apoc-core file (4.4.0.3), with no better outcome.
Changing the ownership and read permissions as follows didn't help either:
sudo chown neo4j:neo4j apoc-mongodb-dependencies-4.4.0.3.jar
sudo chmod 755 apoc-mongodb-dependencies-4.4.0.3.jar
Use the following commands:
sudo wget https://repo1.maven.org/maven2/org/mongodb/mongo-java-driver/3.12.11/mongo-java-driver-3.12.11.jar -O /mnt/neo4j/plugins/mongo-java-driver-3.12.11.jar
sudo wget https://repo1.maven.org/maven2/org/mongodb/mongodb-driver/3.12.11/mongodb-driver-3.12.11.jar -O /mnt/neo4j/plugins/mongodb-driver-3.12.11.jar
sudo wget https://repo1.maven.org/maven2/org/mongodb/mongodb-driver-core/4.7.1/mongodb-driver-core-4.7.1.jar -O /mnt/neo4j/plugins/mongodb-driver-core-4.7.1.jar
sudo wget https://repo1.maven.org/maven2/org/mongodb/bson/4.7.1/bson-4.7.1.jar -O /mnt/neo4j/plugins/bson-4.7.1.jar
Note that I used the latest versions. I tried the versions available in the instructions as well with no difference in the outcome.
Now, when I restart neo4j.service, I can no longer access cypher-shell or the browser. In the first case I get "connection refused", while the browser shows a blank page. When I check the status, the service is active and running, but I noticed that it is missing a line compared to when I don't have the dependencies:
Starting...
This instance is ServerId{#}
======== Neo4j 4.4.5 ======== (This line is missing with the dependencies downloaded!)
When I delete the dependencies from the plugins directory and restart, everything goes back to normal and functions as expected. One more thing to note: the apoc-core procedures work just fine!
I don't know if I'm doing something wrong here or if there is some underlying problem.
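One thing worth ruling out (a sanity check, not something raised in the original thread): a GitHub /releases/tag/... URL serves an HTML page, not the release asset itself; assets are served from /releases/download/<tag>/<file>. So the first wget above may have saved a web page under a .jar name, which could explain Neo4j failing to start. A quick test:
# Should report a Zip/Java archive, not "HTML document"
file /mnt/neo4j/plugins/apoc-mongodb-dependencies-4.3.0.7.jar
# A real jar lists its entries; an HTML page makes unzip fail
unzip -l /mnt/neo4j/plugins/apoc-mongodb-dependencies-4.3.0.7.jar | head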

Unable to debug Java app through Stackdriver in Google Kubernetes cluster

I am trying to debug a Java app on a GKE cluster through Stackdriver.
I have created a GKE cluster with "Allow full access to all Cloud APIs".
I am following the documentation: https://cloud.google.com/debugger/docs/setup/java
Here is my Dockerfile:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE
COPY ${JAR_FILE} alnt-watchlist-microservice.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/alnt-watchlist-microservice.jar"]
The documentation says to add the following lines to the Dockerfile:
RUN mkdir /opt/cdbg && \
wget -qO- https://storage.googleapis.com/cloud-debugger/compute-java/debian-wheezy/cdbg_java_agent_gce.tar.gz | \
tar xvz -C /opt/cdbg
RUN java -agentpath:/opt/cdbg/cdbg_java_agent.so \
    -Dcom.google.cdbg.module=tpm-watchlist \
    -Dcom.google.cdbg.version=v1 \
    -jar /alnt-watchlist-microservice.jar
When I build the Dockerfile, it fails saying tar: invalid magic, tar: short read.
In the Stackdriver debug console, it always shows 'No deployed application found'. Which application should it show? I already have 2 services deployed on my Kubernetes cluster.
I have already executed
gcloud debug source gen-repo-info-file --output-directory="WEB-INF/classes/"
in my project's directory.
It generated source-context.json. After its creation, I tried building the Docker image, and it's failing.
The debugger will be ready for use when you deploy your containerized app. You are getting the No deployed application found error because the debugger agent is failing to download or unpack in your Dockerfile.
Please check this discussion to resolve the tar: invalid magic, tar: short read error.
Unfortunately it looks like Alpine isn't regularly tested with Debugger. There's a sample setup here that might help you: https://github.com/GoogleCloudPlatform/cloud-debug-java#alpine-linux
I resolved the issue.
Firstly, you have to use the Java image gcr.io/google-appengine/openjdk instead of the Alpine one.
Secondly, I had written the ENTRYPOINT arguments in the wrong format (not properly comma-separated). The corrected line:
ENTRYPOINT ["java","-agentpath:/opt/cdbg/cdbg_java_agent.so", "-Djava.security.egd=file:/dev/./urandom" ,"-Dcom.google.cdbg.module=watchlist"]
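Putting both fixes together, a minimal sketch of the resulting Dockerfile (module/version values are taken from the snippets above; adjust them to your app):
FROM gcr.io/google-appengine/openjdk
VOLUME /tmp
ARG JAR_FILE
COPY ${JAR_FILE} alnt-watchlist-microservice.jar
# Download and unpack the Cloud Debugger agent (glibc-based image, so the agent binaries work)
RUN mkdir /opt/cdbg && \
    wget -qO- https://storage.googleapis.com/cloud-debugger/compute-java/debian-wheezy/cdbg_java_agent_gce.tar.gz | \
    tar xvz -C /opt/cdbg
# Exec-form ENTRYPOINT: every argument is a separate, comma-separated string
ENTRYPOINT ["java","-agentpath:/opt/cdbg/cdbg_java_agent.so","-Djava.security.egd=file:/dev/./urandom","-Dcom.google.cdbg.module=watchlist","-Dcom.google.cdbg.version=v1","-jar","/alnt-watchlist-microservice.jar"]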

Drush cannot locate mysql on localhost MAMP

Using Drush commands to update Drupal 8 core on a localhost build in MAMP, I've found that Drush won't acknowledge my MySQL installation.
From reading a few threads, this is apparently because MAMP's default MySQL location is not where Drush expects to find it.
I've followed a few forum suggestions for fixes, but so far have not had any luck.
The latest attempt gives me this error:
[warning] The command 'mysql' is required for preflight but cannot be found.
Please install it and retry. Drush Commandline Tool 9.2.3
Other attempts:
I followed the suggestion from March 14th on this thread:
https://github.com/drush-ops/drush/issues/3464
which gave me this error:
[info] Executing: mysql --defaults-file=/private/tmp/drush_iBYWVg --database=drupal20180405 --host=localhost --port=3306 --silent < /private/tmp/drush_7T1mwj
[info] Executing: mysql --defaults-file=/private/tmp/drush_bvCyn3 --database=drupal20180405 --host=localhost --port=3306 --silent < /private/tmp/drush_a9aRha
In Connection.php line 149:
[PDOException (2002)] SQLSTATE[HY000] [2002] No such file or directory
Another potential solution I tried came from Chrisblomm's answer on this thread:
Drush cannot connect to MySQL on MAMP?
Unfortunately for me that triggered the first error again:
[warning] The command 'mysql' is required for preflight but cannot be found.
Please install it and retry. Drush Commandline Tool 9.2.3
UPDATE: I found a solution. Andrew Patton's comments on this thread solved it for me:
https://stackoverflow.com/a/29990624/2639928
Specifically, his tip to "define and export mysql and mysqladmin as functions".
Once I added his suggested lines to my Mac's ~/.bash_profile, Drush correctly identified MySQL, and I was able to use all the Drush commands that had previously triggered errors.
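For reference, the linked answer's approach amounts to something like the following in ~/.bash_profile (a sketch assuming MAMP's default binary path, /Applications/MAMP/Library/bin):
# Point the bare mysql/mysqladmin names at MAMP's bundled binaries
mysql() { /Applications/MAMP/Library/bin/mysql "$@"; }
mysqladmin() { /Applications/MAMP/Library/bin/mysqladmin "$@"; }
# Export the functions so subprocesses (such as Drush) inherit them
export -f mysql
export -f mysqladmin
After reloading the profile (source ~/.bash_profile), Drush can resolve mysql.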
I had the same issue in my PHP container:
[warning] The shell command 'mysql' is required but cannot be found. Please install it and retry.
The MySQL client was not installed, so to fix it I installed the client:
apt-get install -y default-mysql-client

How can I install sensu-plugins through a proxy or from an offline package

I am using Debian 7 amd64 and Sensu version 0.28.4-1.
I install sensu-plugins through a proxy with the command:
/opt/sensu/embedded/bin/gem install sensu-plugins-redis --user-install --no-document -p http://myproxy:3128 --verbose -s https://rubygems.org/
But I get an error:
ERROR: While executing gem ... (Errno::EPERM)
Operation not permitted - send(2)
When I install it directly on a server that has a direct internet connection, everything succeeds.
I don't know why.
I also looked for a way to install sensu-plugins from an offline package, but found no recommendations.
Please help me, thanks so much.
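For the offline route, one standard RubyGems workflow (a sketch, not something confirmed in this thread) is to download the gem on a connected machine and install it locally on the target:
# On a machine with internet access: download the gem file
gem fetch sensu-plugins-redis
# Note: gem fetch does not pull dependencies; fetch those .gem files too if needed
# Copy the .gem file(s) to the Sensu server, then install from the local files:
/opt/sensu/embedded/bin/gem install --local sensu-plugins-redis-*.gem --no-document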

gsutil copy to storage failing

I'm working on an instance in the us-central1-a zone and I can't copy a ~200GB file.
I've tried:
gsutil -m cp -L my.log my.file gs://my-bucket/
gsutil -m cp -L my.second.log my.file gs://my-bucket2/
And after several "catch ups" I get the following error:
CommandException: Some temporary components were not uploaded successfully. Please retry this upload.
CommandException: X files/objects could not be transferred.
Any clues?
Thanks
This is a message you'll see if gsutil's parallel composite uploads feature fails to upload at least one of the pieces of the file.
A couple of questions...
Have you already tried performing this upload again, after you saw this message?
If this error persists, could you please provide the stack trace from gsutil -d cp...
If you're consistently seeing this error and need an immediate fix (in case this is a bug with parallel uploads), you can set parallel_composite_upload_threshold=0 in the [GSUtil] section of your boto config to disable parallel composite uploads.
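For reference, that amounts to an entry like this in the boto config file (typically ~/.boto, or the path named by the BOTO_CONFIG environment variable):
[GSUtil]
parallel_composite_upload_threshold = 0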
I had the same experience using gsutil. I fixed it by installing crcmod.
First, run the command you had issues with using the debug flag, for example:
gsutil -d -m cp gs://<path_to_file_in_bucket>
In the output I can see:
CommandException: Downloading this composite object requires integrity checking with CRC32c, but your crcmod installation isn't using the module's C extension, so the hash computation will likely throttle download performance. For help installing the extension, please see "gsutil help crcmod".
To download regardless of crcmod performance or to skip slow integrity checks, see the "check_hashes" option in your boto config file.
NOTE: It is strongly recommended that you not disable integrity checks. Doing so could allow data corruption to go undetected during uploading/downloading.
You can follow the instructions here from Google to install crcmod for your specific OS: https://cloud.google.com/storage/docs/gsutil/addlhelp/CRC32CandInstallingcrcmod
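On Debian/Ubuntu, the steps from that page boil down to roughly the following (a sketch; package names differ on other systems, see the link for your OS):
# Build tools and Python headers needed to compile crcmod's C extension
sudo apt-get install gcc python-dev python-setuptools
# Remove any pure-Python crcmod first, then install the compiled one
sudo pip uninstall crcmod
sudo pip install --no-cache-dir -U crcmod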
I got the same error message. I tried logging in to gcloud again with
gcloud auth login
and then I could run the command successfully.