How to `serverless upgrade` when receiving `Error: EXDEV: cross-device link not permitted` - upgrade

In the course of trying to upgrade serverless, I received the following error:
Error: EXDEV: cross-device link not permitted, rename '/tmp/serverless-binary-tmp' -> '/home/<username>/.serverless/bin/serverless'
Looking into other similar errors/questions on SO, I found that this error arises when trying to move files across partitions/devices; the trouble is that /tmp is not a separate partition from / on my machine.

So I first tried looking into changing the /tmp folder location for serverless.com, but was unable to find documentation/options to that effect.
Fortunately, a manual copy of the file seems to have been the only missing step:
cp '/tmp/serverless-binary-tmp' '/home/<username>/.serverless/bin/serverless'
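For anyone hitting the same thing, the check and the manual finish look roughly like this (a minimal sketch; the chmod is an assumption on my part, as the downloaded binary may already be executable):
# Confirm whether /tmp and the home directory actually sit on different filesystems
df /tmp /home/<username>
# Move the downloaded binary into place by hand and make sure it is executable
cp '/tmp/serverless-binary-tmp' '/home/<username>/.serverless/bin/serverless'
chmod +x '/home/<username>/.serverless/bin/serverless'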

Related

What causes error "Connection test failed: spawn npm; ENOENT" when creating new Strapi project with MongoDB?

I am trying to create a new Strapi app on Ubuntu 16.04 using MongoDB. After stepping through the tutorial here: https://strapi.io/documentation/3.0.0-beta.x/guides/databases.html#mongodb-installation, I get the following error: Connection test failed: spawn npm; ENOENT
The error seems obvious, but I'm having trouble getting to the cause of it. I've installed the latest version of MongoDB and have ensured it is running using service mongod status. I can also connect directly using nc, as shown below.
$ nc -zvv localhost 27017
Connection to localhost 27017 port [tcp/*] succeeded!
Any help troubleshooting this would be appreciated! Does Strapi perhaps log setup errors somewhere, or is there a way to get verbose logging? Is it possible the connection error would be logged by MongoDB somewhere?
I was able to find the answer. The problem was with using npx instead of Yarn. Strapi documentation states that either should work, however, it is clear from my experience that there is a bug when using npx.
I switched to Yarn and the process proceeded as expected without error. Steps were otherwise exactly the same.
Update: There is also a typo in the Strapi documentation for Yarn. They include the word "new" before the project name, which will create a project called new and ignore the project name you specified.
Strapi docs (incorrect):
yarn create strapi-app new my-project
Correct usage, based on my experience:
yarn create strapi-app my-project
The ENOENT error is "an abbreviation of Error NO ENTry (or Error NO ENTity), and can actually be used for more than files/directories."
Why does ENOENT mean "No such file or directory"?
Everything I've read on this points toward issues with environment variables and the process.env.PATH.
"NOTE: This error is almost always caused because the command does not exist, because the working directory does not exist, or from a windows-only bug."
How do I debug "Error: spawn ENOENT" on node.js?
If you take the function that Jiaji Zhou provides in the link above and paste it into the top of your config/functions/bootstrap.js file (above module.exports), it might give you a better idea of where the error is occurring; specifically, it should tell you the command it ran. Then run > which nameOfCommand to see what file path it returns.
"miss-installed programs are the most common cause for a not found command. Refer to each command documentation if needed and install it." - laconbass (from the same link, below Jiaji Zhou's answer)
This is how I interpret all of the above and form a solution. Put that function in bootstrap.js, then take the command returned from the function and run > which nameOfCommand. Then in bootstrap.js (you can comment out the function), put console.log(process.env.PATH), which will return a string of all the directories your current environment checks for executables. If the path returned by your which command isn't in your process.env.PATH, you can move the command into a directory that is on your PATH, or try reinstalling.
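As a concrete sketch of those checks at the shell level (assuming, as in my case, that the spawned command turns out to be npm):
# See which executable, if any, resolves for the command Strapi tried to spawn
which npm || echo "npm not found on PATH"
# Print the PATH a Node process sees, one directory per line
node -e "console.log(process.env.PATH.split(':').join('\n'))"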

Understanding the error message: spdlog::spdlog_ex

I am aware this question is very specific. Nonetheless, maybe someone can help:
I was trying to compile an open-source code today (for anyone who's interested, that's the one). The error message described below occurs after running oai_hss -j $PREFIX/hss_rel14.json --onlyloadkey, having followed the step-by-step installation guide up to this point.
After typing the aforementioned command in my terminal, the following error is thrown:
terminate called after throwing an instance of 'spdlog::spdlog_ex'
what(): Failed opening file logs/hss.log for writing: No such file or directory
Aborted (core dumped)
Alright, this sounds pretty severe (core dumped). I searched Google for the meaning of that error message and came across this other GitHub project. Apparently the spdlog class is trying to enable logging from wherever I run my program, and it throws an spdlog_ex error whenever the file it is trying to add to the registry (in this case logs/hss.log) already exists within this registry. So, I guess, the solution to my problem would be to find this registry and delete logs/hss.log. Does this make sense?
Question: Where the heck do I find this registry?
Maybe some background knowledge would be useful: I am trying to compile the open-source code within a VM that is running Ubuntu 18.04.3 LTS bionic with a 4.15.0-66-generic kernel.
I already searched the /tmp directory for a logs folder; there is none. Where else could it be?
Open this file:
sudo nano /usr/local/etc/oai/hss_rel14.json
You will see some config entries where you can find logs/hss.log.
Actually, you have to change these four values to:
logname: "/var/log/hss.log"
statlogname: "/var/log/hss_stat.log"
auditlogname: "/var/log/hss_audit.log"
ossfile: "~/openair-cn/etc/oss.json"
Then use sudo touch to create these files:
sudo touch /var/log/hss.log
sudo touch /var/log/hss_stat.log
sudo touch /var/log/hss_audit.log
For logname, statlogname, and auditlogname you can use whatever file paths you want, but I like to put them together in the /var/log folder.
For ossfile, the oss.json file is actually already in that location.
Hope this helps.
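One extra step that may be needed (an assumption on my part, since sudo touch creates root-owned files): make sure the user that runs oai_hss can actually write to them, for example:
# Files created by "sudo touch" are owned by root; if oai_hss runs as a regular
# user it will still fail to open them for writing, so hand them over:
sudo chown $USER: /var/log/hss.log /var/log/hss_stat.log /var/log/hss_audit.log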

Creating symbolic links resulting in 500 error

Currently running a WHM/cPanel server on CentOS. The server seems to be running fine, no issues there. However, I'm using a deployment process to put files outside of the document root, e.g.
~/deployment
instead of:
~/public_html
Obviously I need to point public_html to this folder so my site will run. So, I'm removing public_html, creating a symlink, and pointing it to the new deployment folder. This results in a 500 error.
So looking at the logs I've discovered that it produces the following error:
Directory "/home/xyz/deployment" is writeable by group
Checking the file permissions, it looks as though the symlink is 777, where I need it to be 755 for the server to allow viewing.
Is there a setting in WHM? Is there a setting in CentOS? I have another box running that doesn't have this issue, so I'm assuming that this is related to the current setup of this machine.
Any help would be appreciated, thanks.
When you create a hard link from a file or folder, the link inherits the access rights and permissions of the original file/folder, whereas a soft link will show up with 777 permissions. So I think you can use rsync instead (sketched below), which serves both purposes:
1- have a folder with all the files from the source
2- keep your own permissions on that folder
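A minimal sketch of that rsync approach (paths are assumptions matching the question; adjust to your setup):
# Copy the deployed files into public_html instead of symlinking it,
# forcing directories to 755 and files to 644 along the way
rsync -a --delete --chmod=D755,F644 ~/deployment/ ~/public_html/
The trailing slashes matter: they make rsync copy the contents of deployment into public_html rather than nesting a deployment folder inside it.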

PostgreSQL 9.x - pg_read_binary_file & inserting files into bytea

I have been looking everywhere (google, stackoverflow, etc.) for some documentation on how to use the PostgreSQL pg_read_binary_file() function.
The only meaningful thing I can find is this page in the official documentation.
Every time I try to use this function I get an error.
For example:
SELECT pg_read_binary_file('/some/path/and/file.gif');
ERROR: absolute path not allowed
or
SELECT pg_read_binary_file('file.gif');
ERROR: could not stat file "file.gif": No such file or directory
Do I need to have my file in a specific directory for Postgres to have access to it? If so what directory?
If it matters, the reason I am looking at this function is because I am trying to insert a file into the database without doing crazy things.
As stated by @a_horse_with_no_name and @guedes, the solution is to ensure that the file being uploaded is on the server, inside the PGDATA directory.
The Postgres documentation does state this file location as a requirement.
Additionally, I made a symlink from another directory to the PGDATA directory so that I would not disturb any of the postgres data structure. This seems to be working well and I don't have to do any of the above crazy things.
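A hedged sketch of how the pieces fit together (the database, table, and file names below are made up for illustration; the file, or a symlink to it, sits under the PGDATA directory so a relative path is allowed):
# Create a table with a bytea column and load the file into it;
# pg_read_binary_file() resolves relative paths against the data directory
psql -d mydb -c "CREATE TABLE IF NOT EXISTS images (id serial PRIMARY KEY, data bytea);"
psql -d mydb -c "INSERT INTO images (data) VALUES (pg_read_binary_file('staging/file.gif'));"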

PostgreSQL issue: could not access file "$libdir/plpgsql": No such file or directory

I get this exception in PostgreSQL:
org.postgresql.util.PSQLException: ERROR: could not access file "$libdir/plpgsql": No such file or directory
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1721)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1489)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:193)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:452)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:337)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:236)
at org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:205)
I searched a lot, and most solutions point to a wrong installation. But this is my test DB, which has been running without issues for a long time. Also, inserts are working; the issue occurs only on SELECT queries.
Apparently, you moved your PostgreSQL lib directory out of place. To confirm this, try the following in psql:
> SET client_encoding TO iso88591;
ERROR: could not access file "$libdir/utf8_and_iso8859_1": No such file or directory
If you get an error message like this, then my theory is correct. You'll need to find out where those files ended up, or you can reinstall PostgreSQL to restore them.
To find out what $libdir is referring to, run the following command:
pg_config --pkglibdir
For me, this produces:
/usr/lib/postgresql
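A quick way to confirm the theory (assuming a typical Linux layout) is to check whether the libraries are actually present under that directory:
# If these are missing, the "$libdir/..." errors above are explained
ls "$(pg_config --pkglibdir)/plpgsql.so"
ls "$(pg_config --pkglibdir)/utf8_and_iso8859_1.so"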
I had the same problem: the other Postgres server instance (8.4) was interfering with the 9.1 one; when the 8.4 instance is removed, it works.
The other instance can sometimes be removed from the system while still running (e.g. you do a Gentoo update and a depclean without stopping and migrating your data), so the error seems particularly mysterious.
The solution is usually going to be doing a slot install/eselect of the old version (in Gentoo terms, or simply downgrading on other distros), running its pg_dumpall, and then uninstalling/reinstalling the new version and importing the data.
This worked pretty painlessly for me.
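A rough sketch of that dump-and-reimport dance (version numbers and paths are assumptions; substitute the ones from your own system):
# With the old 8.4 binaries temporarily available again, dump everything...
/usr/lib/postgresql-8.4/bin/pg_dumpall -U postgres > /tmp/all_databases.sql
# ...then, after cleanly reinstalling the 9.1 packages, import the data
psql -U postgres -f /tmp/all_databases.sql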