Last week I successfully deployed an AWS Lambda function (verified in the AWS console). This morning, I can no longer update the Lambda function. After deleting the Lambda function and pushing changes again, the Lambda still could not be created. Instead I get the following traceback:
Build Failed
Error: PythonPipBuilder:ResolveDependencies - The directory '/github/home/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/github/home/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Invalid requirement: 'Warning: The lock flag'
In the workflow deploy file:
- name: Build Lambda image
  run: sam build
I don't know exactly what has changed to cause this error now. I tried the flag --use-container, which successfully moves on to the next step of deploying the Lambda image; however, there I now encounter further error messages. I'd like to understand why I didn't encounter this error before adding the --use-container flag. Is the --use-container flag necessary when the build is not run locally with the sam cli?
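For reference, adding the flag is a one-line change to the workflow step shown above (a sketch, reusing the same step name):
- name: Build Lambda image
  run: sam build --use-container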
Further info
Building locally via the sam cli tool works; it only fails when pushed via the GitHub Actions workflow.
I am trying to set up GitHub CI/CD with Hasura...
I did everything as the documentation said, but since I am applying changes locally on the database, the cloud deployment says the table already exists while applying the migration (which is logically correct).
Now I want to avoid, skip, or sync the migration between cloud and local; for that, Hasura mentions a command in the same doc.
While executing this command I am getting a resource not found error.
command: hasura migrate apply --skip-execution --version 1631602988318 --endpoint "https://customer-support-dev.hasura.app/v1/graphql" --admin-secret 'mySecretKey'
error: time="2021-09-14T20:44:19+05:30" level=fatal msg="{\n \"path\": \"$\",\n \"error\": \"resource does not exist\",\n \"code\": \"not-found\"\n}"
This was a silly mistake: --endpoint must not contain a URL path. So its value will be https://customer-support-dev.hasura.app.
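For reference, the corrected command looks like this (same version and placeholder admin secret as above):
hasura migrate apply --skip-execution --version 1631602988318 --endpoint "https://customer-support-dev.hasura.app" --admin-secret 'mySecretKey'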
I'm trying to import a local Postgresql database to Heroku and I'm following these steps https://devcenter.heroku.com/articles/heroku-postgres-import-export#import-to-heroku-postgres.
I have successfully (the corresponding commands are sketched after this list):
created a dump
uploaded it to an S3 Bucket
created from AWS CLI a signed link
ran the command heroku pg:backups:restore '<SIGNED URL>' DATABASE_URL (adding -a with my app name).
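Roughly, those steps correspond to the following commands (user, database, bucket, and app names are placeholders):
pg_dump -Fc --no-acl --no-owner -h localhost -U myuser mydb > mydb.dump
aws s3 cp mydb.dump s3://my-bucket/mydb.dump
aws s3 presign s3://my-bucket/mydb.dump
heroku pg:backups:restore '<SIGNED URL>' DATABASE_URL -a my-app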
The process to restore a backup starts correctly but then exits with this code:
! An error occurred and the backup did not finish.
!
! Could not initialize transfer
!
! Run heroku pg:backups:info r011 for more details.
Opening the log shows:
Database: BACKUP
Finished at: 2020-01-09 18:49:30 +0000
Status: Failed
Type: Manual
Backup Size: 0.00B (0% compression)
=== Backup Logs
2020-01-09 18:49:30 +0000 Could not initialize transfer
I've tried:
re-uploading the file to the bucket,
generating a new signed link,
putting the app in maintenance mode,
creating a user in my IAM management service with full S3 access and saving the credentials in the app environment as described in https://devcenter.heroku.com/articles/s3
Not sure where to go from here, but I would appreciate any help. (I'm on the hobby plan, so I can't ask Heroku support for help.)
Edit: I also tried:
deleting and recreating the S3 Bucket
installing version 1 of the AWS CLI to see if by chance the structure of a presigned link had changed
Edit 2: Since I could not find a solution, I've opted to migrate the hosting entirely to AWS for the moment.
Make sure that the default profile stored on your machine in ~/.aws/ is set to the credentials you created for your Heroku config. Then also make sure the signed URL is created with those credentials and that config. I had to set my default credentials to the credentials I put in my Heroku config, and I also had to set my default region in ~/.aws/config to match the bucket location. It should work after that.
Here are some instructions if you are on macOS or Linux (a shell sketch follows the numbered steps).
Sorry, Windows people; I would assume it is something similar.
Create a new access key ID and secret key in IAM on AWS
Set your Heroku config to use those credentials: heroku config:set AWS_ACCESS_KEY_ID=xxx AWS_SECRET_ACCESS_KEY=yyy
Optional (you may have to set the bucket name in the Heroku config too)
On your machine, set the credentials you just created as the default in ~/.aws/credentials
On your machine, set the default region in ~/.aws/config to the one that corresponds to your bucket
Create a signed URL: aws s3 presign s3://your-bucket-address/your-object
Run the restore: heroku pg:backups:restore '<SIGNED URL>' DATABASE_URL
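A minimal shell sketch of those steps (the key values, region, bucket, and app name are placeholders):
# Heroku side: store the IAM credentials in the app config
heroku config:set AWS_ACCESS_KEY_ID=xxx AWS_SECRET_ACCESS_KEY=yyy -a your-app

# Local side: make the same credentials the default profile and match the bucket's region
aws configure set aws_access_key_id xxx
aws configure set aws_secret_access_key yyy
aws configure set region us-east-1

# Generate the presigned URL with that default profile, then run the restore
aws s3 presign s3://your-bucket-address/your-object
heroku pg:backups:restore '<SIGNED URL>' DATABASE_URL -a your-app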
Had the exact same error and made these 2 adjustments. In the S3 console, click on the file you want to use for the backup. You should see the name of your file followed by 4 tabs. In the General information tab, do the following:
Click on Make public to make the file available for download.
Get the URL for that object where it says URL of object
(It should be something like https://mybucket.s3.amazonaws.com/my.file. You can test whether it works by pasting that URL into a new Chrome tab; hitting that URL should trigger the download of your file.)
Once the previous check is working you can proceed to
heroku pg:backups restore 'https://mybucket.s3.amazonaws.com/my.file' DATABASE_URL
I ran into the same issue and discovered that the cause was that I had my bucket's region set as us-east rather than us-east-1.
I am trying to create a new Strapi app on Ubuntu 16.04 using MongoDB. After stepping through the tutorial here: https://strapi.io/documentation/3.0.0-beta.x/guides/databases.html#mongodb-installation, I get the following error: Connection test failed: spawn npm; ENOENT
The error seems obvious, but I'm having issues getting to the cause of it. I've installed latest version of MongoDB and have ensured it is running using service mongod status. I can also connect directly using nc, like below.
$ nc -zvv localhost 27017
Connection to localhost 27017 port [tcp/*] succeeded!
Any help troubleshooting this would be appreciated! Does Strapi perhaps log setup errors somewhere, or is there a way to get verbose logging? Is it possible the connection error would be logged by MongoDB somewhere?
I was able to find the answer. The problem was with using npx instead of Yarn. The Strapi documentation states that either should work; however, it is clear from my experience that there is a bug when using npx.
I switched to Yarn and the process proceeded as expected without error. Steps were otherwise exactly the same.
Update: There is also a typo in the Strapi documentation for Yarn. It includes the word "new" before the project name, which will create a project called new and ignore the project name.
Strapi docs (incorrect):
yarn create strapi-app new my-project
Correct usage, based on my experience:
yarn create strapi-app my-project
The ENOENT error is "an abbreviation of Error NO ENTry (or Error NO ENTity), and can actually be used for more than files/directories."
Why does ENOENT mean "No such file or directory"?
Everything I've read on this points toward issues with environment variables and the process.env.PATH.
"NOTE: This error is almost always caused because the command does not exist, because the working directory does not exist, or from a windows-only bug."
How do I debug "Error: spawn ENOENT" on node.js?
If you take the function that Jiaji Zhou provides in the link above and paste it into the top of your config/functions/bootstrap.js file (above module.exports), it might give you a better idea of where the error is occurring; specifically, it should tell you the command it ran. Then run > which nameOfCommand to see what file path it returns.
"miss-installed programs are the most common cause for a not found command. Refer to each command documentation if needed and install it." - laconbass (from the same link, below Jiaji Zhou's answer)
This is how I interpret all of the above and form a solution. Put that function in bootstrap.js, then take the command returned from the function and run > which nameOfCommand. Then in bootstrap.js (you can comment out the function), put console.log(process.env.PATH), which will print a string of all the directories your current environment checks for executables. If the path returned from your which command isn't in your process.env.PATH, you can move the command into a directory on your PATH, or try re-installing.
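A quick shell sketch of that check, assuming the failing command is npm as in the error above:
# Where does the shell find the command that spawn could not locate?
which npm

# What PATH does Node actually see? The directory printed above must appear in this
# list, otherwise spawn will fail with ENOENT.
node -e "console.log(process.env.PATH)"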
On a machine running Apache there are 3 Rails apps, all deployed with Capistrano and running under Passenger. The deployment scripts are standard. My predecessor set up the deploy user in textbook style; all 3 projects use the same Capistrano version (3.8) and the same directory structure on the server. All sit in a common directory within the deploy user's home. All use Passenger and the same Ruby and Rails versions. They share some linked directories, but otherwise the deployment scripts are all pretty trivial. My predecessor insists that in his time the deployment worked. We still have his machine, and deployment does not work from his machine either, and again only for this one project.
On cap production deploy of one of the three projects I always get an Errno::EACCES: Permission denied @ dir_initialize - /tmp/passenger.h6D8mJy/apps.s error.
My only workaround is the following:
I log in on the production server, make myself superuser and change the owner of the /tmp directory to deploy. Then I run the deployment script, and it succeeds. (Then of course I change the owner of the directory back to root.)
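In shell terms, the workaround amounts to something like this (the chown commands run on the production server, the deploy from the workstation):
# on the production server, before deploying
sudo chown deploy /tmp

# from the workstation
cap production deploy

# on the production server, afterwards
sudo chown root /tmp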
So it seems as if the /tmp/passenger.something directory has the wrong owner. Somehow I don't think this can be the problem though, since the other two scripts use the same directory and don't have this problem. Or do they? Who creates this directory and why, and where is the ownership of this directory configured?
I thought it best if I just included the log, but I had to cut most of it (Stack Overflow rejected my post because "this looks like spam").
INFO [e1c2bb25] Running ~/.rvm/bin/rvm ruby-2.3.1 do bundle exec rake assets:precompile as deploy@99.999.99.999
DEBUG [e1c2bb25] Command: cd /home/deploy/projects/external-services/releases/20190423082459 && ( export RAILS_ENV="production" ; ~/.rvm/bin/rvm ruby-2.3.1 do bundle exec rake assets:precompile )
DEBUG [56ee67e8] rake aborted!
Errno::EACCES: Permission denied @ dir_initialize - /tmp/passenger.h6D8mJy/apps.s
We have a read-only PostgreSQL database that should run in an OpenShift cluster.
We are using RHEL as the underlying operating system.
Our Dockerfile installs the postgres software, creates the database instance, loads the data into it, then shuts the database down and saves the image.
We are using only bash and SQL scripts and deploy the database using Flyway.
When starting the container, the entrypoint script simply starts up the database instance using the pg_ctl command and then performs an endless loop to keep the container running.
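Roughly, such an entrypoint looks like this (the data directory path here is an assumption):
#!/bin/bash
# start the already-initialized instance, then keep the container alive
pg_ctl -D /var/lib/pgsql/data -w start
while true; do sleep 3600; done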
The Dockerfile has as the last command USER 26, where 26 is the id of the postgres user. The entrypoint script can be started as the postgres user or by a sudo user.
Everything is working well in Docker.
In OpenShift the container is started by a different user belonging to the root group, but neither the root user nor user 26. In fact, OpenShift ignores the USER 26 clause in the Dockerfile.
The user starting the container (we'll call it containeruser) has no rights to start the postgres instance, so when the entrypoint runs it gets a permission denied error on the PostgreSQL data folder.
I have tried different options, such as adding containeruser to the wheel group and modifying the sudoers file to allow it to use sudo and start the entrypoint as the postgres user, but with no success.
So I have my database image ready but cannot start it in OpenShift.
On the OpenShift configuration side we are not allowed to make changes such as allowing sudo usage or starting the container as the root or postgres user.
Any idea or help with this problem?
I am not an Openshift expert.
Thank you!
Best regards,
rimetnac
You have two choices.
The preferred choice is to fix your image so that it can run as any user. For this, do not rely on the existing postgres user. Create a new user whose primary group is root. Then ensure that all directories/files that PostgreSQL needs to write to are owned by that user, but also have group root and are group-writable. When the container is then started up, it will run as an assigned user ID that is not in /etc/passwd, and so will fall back to using group root. Because the directories/files are writable by group root, everything will still work. For more information see:
https://docs.openshift.org/latest/creating_images/guidelines.html
Specifically, section 'Support Arbitrary User IDs'.
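A minimal sketch of what that guideline can look like during the image build (the data directory path, the user name, and the UID 1001 are assumptions for illustration):
# run as root during the image build, before the final USER instruction
useradd -u 1001 -g 0 pguser                # new user whose primary group is root (gid 0)
mkdir -p /var/lib/pgsql/data
chown -R 1001:0 /var/lib/pgsql/data        # owned by that user, group root
chmod -R g+rwX /var/lib/pgsql/data         # group-writable, so an arbitrary UID in group root can still write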
The second option, if you have admin control of the cluster and your security team does not object to you overriding the default security model, is to allow your image to be run as the user ID it wants.
First create a new service account:
oc create serviceaccount runasnonroot
Next, grant that service account the ability to run as a non-root user ID of its choosing.
oc adm policy add-scc-to-user nonroot -z runasnonroot --as system:admin
Then patch the deployment config to use that service account.
oc patch dc/mydatabase --patch '{"spec":{"template":{"spec":{"serviceAccountName": "runasnonroot"}}}}'
Note that this still requires you to use USER in the image with an integer user ID and not postgres. Otherwise OpenShift can't verify that it will run as a non-root user; if you used a user name instead of a user ID, it could be maliciously mapped to root.
I spent days figuring this out and found one good solution.
OpenShift Origin runs an image as a user created by it, as explained in this OpenShift blog post. This prevents programs from being able to access needed files and directories. To successfully run a program on OpenShift Origin, the blog post provides two solutions; however, the first will not work for PostgreSQL and the second has two disadvantages (explained in the notes below):
Grant group write access to the directories used by the main program.
This will not solve the problem because, although the PostgreSQL files will be accessible by any program, they must be owned by the owner of the PostgreSQL process.
Ensure that when operating system libraries are used to look up a system user, one is returned for the ID of the user OpenShift Origin runs the image as. The following are two methods for doing this:
Use a package called nss_wrapper, "which intercepts any calls which look up details of a user and returns a valid entry." (A sketch of this appears after the list.)
Make the UNIX password database file (/etc/passwd) have global write permissions in the image build so that the OpenShift user can be added to it in the S2I run script.
Each option has a disadvantage: the first requires installing an extra package, and the second makes user accounts insecure.
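For reference, the nss_wrapper approach usually looks something like this in the container's startup script (the library path and home directory are assumptions for a RHEL-based image):
# build a fake passwd entry for whatever UID the platform assigned to this container
echo "postgres:x:$(id -u):0:PostgreSQL user:/var/lib/pgsql:/bin/bash" > /tmp/passwd
export NSS_WRAPPER_PASSWD=/tmp/passwd
export NSS_WRAPPER_GROUP=/etc/group
export LD_PRELOAD=/usr/lib64/libnss_wrapper.so   # intercepts user lookups and returns the entry above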
The best solution is to build the docker image to run as the user OpenShift Origin will run the image as. I built this instructional image with it.
One additional problem to note is that, as the owner of the PostgreSQL process must be the owner of the files and directories accessed by PostgreSQL, PostgreSQL must be set up (i.e. initdb, roles, databases, etc.) during the image build. This is because file ownership can only be changed during the image build and the ownership of the files must be changed after PostgreSQL has been set up for the reason explained in #2 below.
Here are the complete steps with notes for setting up PostgreSQL in the image build (a condensed shell sketch follows the steps):
Manually create the PostgreSQL data directory and change its ownership to a non-root user that will be used to initialize PostgreSQL and set up the components (e.g. roles and databases) required to run the server on OpenShift Origin.
This is required because the "initdb" executable must be executed by a user other than root and will need access to the data directory. Additionally, this user cannot be the user OpenShift Origin will run the image as because it is not in the system.
Switch to the non-root user.
This is required because the initdb executable must be executed "as the user that will own the server process, because the server needs to have access to the files and directories that initdb creates" (PostgreSQL documentation) and because the PostgreSQL server will be started to set up components (e.g. roles and databases) required to run the server on OpenShift Origin.
Run the "initdb" executable.
Start the PostgreSQL server, set up the required components (roles, databases, etc.) and stop the PostgreSQL server.
Switch back to the root user.
Change the ownership of the PostgreSQL files and directories to the user OpenShift Origin will run the image as.
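Condensed into shell form, the build-time steps above might look like this (the data directory, role/database names, and the final UID are placeholders, not values from the original image):
# create the data directory and hand it to the build-time postgres user
mkdir -p /var/lib/pgsql/data
chown -R postgres /var/lib/pgsql/data

# as that user: initialize the cluster, start it, create the roles and databases, stop it again
su - postgres -c 'initdb -D /var/lib/pgsql/data'
su - postgres -c 'pg_ctl -D /var/lib/pgsql/data -w start'
su - postgres -c 'psql -c "CREATE ROLE app LOGIN;"'
su - postgres -c 'psql -c "CREATE DATABASE app OWNER app;"'
su - postgres -c 'pg_ctl -D /var/lib/pgsql/data -w stop'

# back as root: reassign everything to the UID the image will run as
chown -R 1001 /var/lib/pgsql/data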
Edit (06/20/18): I have found that there is a solution to set up PostgreSQL after the image is built. The user OpenShift Origin will run the image as can be added to the system at the start of the build. This will allow PostgreSQL to be set up and the ownership of its files and directories to be changed after the image build.
After gathering the comments from all contributors, I can answer my question as follows:
Option 1
When you create the postgres database during the image build, you must configure OpenShift policies to allow starting your container as the user that created the database during the build. Use this option when the database must be filled with data and this operation takes too much time to be appropriate at container start. The entrypoint will then only start the already prepared database.
Option 2
Create your database when starting the container using the entrypoint script. Use this option when the database creation is fast enough to be done at container start.
Option 3
See the last comment from Adrian, which seems to answer all the problems; however, I didn't get the time to test it.
Thank you all for your contributions.