I was trying to deploy a Next.js application to Elastic Beanstalk via eb deploy, but the source bundle failed to unzip during deployment because it contained some pre-built .next pages whose file names are UTF-8 encoded. The error is shown below.
2022/xx/xx xx:xx:xx.xxxxxx [INFO] Executing instruction: StageApplication
2022/xx/xx xx:xx:xx.xxxxxx [INFO] extracting /opt/elasticbeanstalk/deployment/app_source_bundle to /var/app/staging/
2022/01/31 04:56:44.300483 [INFO] Running command /bin/sh -c /usr/bin/unzip -q -o /opt/elasticbeanstalk/deployment/app_source_bundle -d /var/app/staging/
2022/01/31 04:56:45.932820 [ERROR] An error occurred during execution of command [app-deploy] - [StageApplication]. Stop running the command. Error: Command /bin/sh -c /usr/bin/unzip -q -o /opt/elasticbeanstalk/deployment/app_source_bundle -d /var/app/staging/ failed with error exit status 50. Stderr:error: cannot create /var/app/staging/.next/server/pages/\u6e2c\u8a66/\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66.html
File name too long
error: cannot create /var/app/staging/.next/server/pages/\u6e2c\u8a66/\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66\u6e2c\u8a66.json
File name too long
I was able to unzip the file locally with the option -O UTF-8. Is there any way I could add this flag to the unzip step of the eb deploy process?
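For reference, a local invocation along these lines succeeds (the bundle and target paths here are illustrative):
# Unzip the source bundle, forcing UTF-8 interpretation of file names
unzip -O UTF-8 -q -o app_source_bundle -d ./staging/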
Edit 1: I am working with the platform 64bit Amazon Linux 2/5.4.9.
Not sure if it is good practice, but I eventually added an .ebextensions config to work around the original unzip flow. Since commands run before the source bundle is extracted and container_commands run after, I back up the bundle and strip the problematic .next entries first, then re-unzip the backup with -O UTF-8 into the staging directory:
commands:
  command backup original zip:
    command: |
      logger "backup zip" && cp /opt/elasticbeanstalk/deployment/app_source_bundle /tmp/app_source_bundle_bak &&
      logger "rm existing zip .next folder" && zip -Ad /opt/elasticbeanstalk/deployment/app_source_bundle ".next/*"
    cwd: /home/ec2-user
    ignoreErrors: false
container_commands:
  replace the original zip to staging:
    command: |
      logger "custom unzip" &&
      unzip -O UTF-8 -q -o /tmp/app_source_bundle_bak -d /var/app/staging/
    cwd: /home/ec2-user
    ignoreErrors: false
I'm using GitHub Actions to implement a CI pipeline in my project. Currently, I'm trying to use actions/cache@v2 to cache the yarn cache dir to improve the pipeline time. Unfortunately, whenever actions/cache@v2 runs, I get an error in the post-job step saying: /bin/tar: unrecognized option: posix. The complete log is:
Post job cleanup.
/usr/bin/docker exec 4decc52e7744d9ab2e81bb24c99a830acc848912515ef1e86fbb9b8d5049c9cf sh -c "cat /etc/*release | grep ^ID"
/bin/tar --posix -z -cf cache.tgz -P -C /__w/open-tuna-api/open-tuna-api --files-from manifest.txt
/bin/tar: unrecognized option: posix
BusyBox v1.31.1 () multi-call binary.
Usage: tar c|x|t [-ZzJjahmvokO] [-f TARFILE] [-C DIR] [-T FILE] [-X FILE] [--exclude PATTERN]... [FILE]...
Create, extract, or list files from a tar file
c Create
x Extract
t List
-f FILE Name of TARFILE ('-' for stdin/out)
-C DIR Change to DIR before operation
-v Verbose
-O Extract to stdout
-m Don't restore mtime
-o Don't restore user:group
-k Don't replace existing files
-Z (De)compress using compress
-z (De)compress using gzip
-J (De)compress using xz
-j (De)compress using bzip2
-a (De)compress using lzma
-h Follow symlinks
-T FILE File with names to include
-X FILE File with glob patterns to exclude
--exclude PATTERN Glob pattern to exclude
Warning: Tar failed with error: The process '/bin/tar' failed with exit code 1
I'm following the example from the official cache action repository. Here is a snippet of my CI.yml:
# Configure cache
- name: Get yarn cache directory path
  id: yarn-cache-dir-path
  run: echo "::set-output name=dir::$(yarn cache dir)"
- uses: actions/cache@v2
  id: yarn-cache # use this to check for `cache-hit` (`steps.yarn-cache.outputs.cache-hit != 'true'`)
  with:
    path: ${{ steps.yarn-cache-dir-path.outputs.dir }}
    key: ${{ runner.os }}-yarn-${{ hashFiles('yarn.lock') }}
    restore-keys: |
      ${{ runner.os }}-yarn-
Because of the above error, the cache is not created and the pipeline time is not improved. I've tried changing the hashFiles expression and the entire key, but with no success.
My question is: am I making some mistake in my use of the cache action? Can anyone help me with this issue? Thanks.
Your problem is that you're running inside an Alpine Linux-based container. Alpine Linux is designed for small size, and as a result it replaces many of the standard GNU utilities with those from busybox, a multi-call binary. Your version of tar is one of those.
The actions/cache@v2 action uses tar --posix, which tells tar to create a standard pax-format archive. pax archives are a form of tar archive that can handle arbitrarily long file names, huge file sizes, and other types of metadata that plain tar archives cannot. This format is specified by POSIX and is a better choice than GNU tar-style archives because it works across a variety of systems and isn't defined by what one implementation does, in addition to being more featureful.
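For example, with GNU tar, --posix is an alias for --format=pax, so the failing command amounts to something like this (paths illustrative):
# Create a gzip-compressed pax-format (POSIX) archive of a directory
tar --posix -z -cf cache.tgz -C /path/to/cache .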
However, the version of tar shipped as part of busybox doesn't support the --posix option, and as a result the command fails. If you want to use the actions/cache@v2 GitHub Action, then you need to provide a GNU or BSD (libarchive) tar earlier in your PATH before running it, so that it is used instead of busybox's.
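As a sketch of one way to do that, assuming your job container is Alpine-based with apk available (on Alpine, the tar package provides GNU tar, which is then found in PATH ahead of busybox's /bin/tar):
# Run this step before any step that uses actions/cache@v2
- name: Install GNU tar
  run: apk add --no-cache tar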
I'm trying to get SpotBugs to run on a Scala project using the SpotBugs CLI.
I installed the CLI like this:
$ curl -L -o /tmp/spotbugs-4.0.3.tgz https://github.com/spotbugs/spotbugs/releases/download/4.0.3/spotbugs-4.0.3.tgz
$ gunzip -c /tmp/spotbugs-4.0.3.tgz | tar xvf - -C /tmp
Then I run it like this
$ time java -jar /tmp/spotbugs-4.0.3/lib/spotbugs.jar -textui -xml:withMessages -html -output target/scala-2.11/spotbugs-report.html vad/target/scala-2.11/projectx-SNAPSHOT-assembly.jar
^Cjava -jar /tmp/spotbugs-4.0.3/lib/spotbugs.jar -textui -xml:withMessages -htm 2462.79s user 135.67s system 130% cpu 33:16.98 total
You can see that it ran for more than 30 minutes without even finishing; I had to halt it.
SpotBugs is obviously not running properly here, so what am I doing wrong?
I am trying out OWASP ZAP to see if it is something we can use for our project, but I cannot make it work. I don't know what I am doing wrong, and the documentation really does not help. I am trying to run a scan against my API, which runs in a Docker container locally on my Windows machine, so I run the command:
docker run -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-stable zap-baseline.py -t http://172.21.0.2:8080/swagger.json -g gen.conf -r testreport.html
The IP 172.21.0.2 is the IP address of my API container; I even tried localhost and 127.0.0.1. But it just hangs at the following log message:
_XSERVTransmkdir: ERROR: euid != 0,directory /tmp/.X11-unix will not be created.
Feb 14, 2019 1:43:31 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Nothing happens, and my ZAP Docker container is in an unhealthy state; after some time it just crashes and ends with a bunch of NullPointerExceptions. Does the ZAP Docker image only work on Linux, or is there something specific I need to do when running it on a Windows machine? I don't get why this is not working even when I am following the guideline at https://github.com/zaproxy/zaproxy/wiki/Docker exactly.
Edit 1
My latest try, where I target my host IP address directly along with the port I am exposing my API on, gives me the following error:
_XSERVTransmkdir: ERROR: euid != 0,directory /tmp/.X11-unix will not be created.
Feb 14, 2019 2:12:07 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Total of 3 URLs
ERROR Permission denied
2019-02-14 14:12:57,116 I/O error(13): Permission denied
Traceback (most recent call last):
File "/zap/zap-baseline.py", line 347, in main
with open(base_dir + generate, 'w') as f:
IOError: [Errno 13] Permission denied: '/zap/wrk/gen.conf'
Found Java version 1.8.0_151
Available memory: 3928 MB
Setting jvm heap size: -Xmx982m
213 [main] INFO org.zaproxy.zap.DaemonBootstrap
When you run docker with: docker run -v $(pwd):/zap/wrk/:rw ...
you are mapping the /zap/wrk/ directory in the docker image to the current working directory (cwd) of the machine on which you are running docker.
I think the problem is that your current user doesn't have write access to the cwd.
Try the command below; hopefully it resolves the issue.
$ docker run --user $(id -u):$(id -g) -v $(pwd):/zap/wrk/:rw --rm -t owasp/zap2docker-stable zap-baseline.py -t https://your_url -g gen.conf -r testreport.html
The key error here is:
IOError: [Errno 13] Permission denied: '/zap/wrk/gen.conf'
This means that the script cannot write to the gen.conf file in the directory you have mounted on /zap/wrk.
Do you have write access to the cwd when it's not mounted?
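As an illustrative check on the host, before mounting:
# Show the owner and permissions of the current directory
ls -ld "$(pwd)"
# Create and remove a scratch file to confirm write access
touch .zap-write-test && rm .zap-write-test && echo writable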
The reason is that if you use the -r parameter, ZAP will attempt to generate the report.html file at /zap/wrk/. In order to make this work, we have to mount a directory to /zap/wrk.
But when you do so, it is important that the ZAP container is able to perform write operations on the mounted directory.
So, below is a working solution using a GitLab CI YAML file. I started with the approach of using image: owasp/zap2docker-stable, but then had to fall back to plain docker commands to execute it.
test_site:
  stage: test
  image: docker:latest
  script:
    # The folder zap-reports created locally will be mounted into the owasp/zap2docker container;
    # on execution it will generate the reports in this folder. The current user is passed so reports can be generated.
    - mkdir zap-reports
    - cd zap-reports
    - docker pull owasp/zap2docker-stable:latest || echo
    - docker run --name zap-container --rm -v $(pwd):/zap/wrk -u $(id -u ${USER}):$(id -g ${USER}) owasp/zap2docker-stable zap-baseline.py -t "https://example.com" -r report.html
  artifacts:
    when: always
    paths:
      - zap-reports
  allow_failure: true
So the tricks in the above code are:
Mount the local directory zap-reports to /zap/wrk, as in $(pwd):/zap/wrk.
Pass the current user and group on the host machine to the docker container so the process runs as the same user/group. This allows the reports to be written to the directory mounted from the host, and is done with -u $(id -u ${USER}):$(id -g ${USER}).
And below is the working code that uses image: owasp/zap2docker-stable directly:
test_site:
  variables:
    GIT_STRATEGY: none
  stage: test
  image:
    name: owasp/zap2docker-stable:latest
  before_script:
    - mkdir -p /zap/wrk
  script:
    - zap-baseline.py -t "https://example.com" -g gen.conf -I -r testreport.html
    - cp /zap/wrk/testreport.html testreport.html
  artifacts:
    when: always
    paths:
      - zap.out
      - testreport.html
I have the shell script below, which needs to be converted to Ansible tasks.
#!/bin/sh
echo "Installing Sonar"
SONAR_HOME=/tui/hybris/sonar
if [ ! -d "$SONAR_HOME" ]; then
  mkdir -p $SONAR_HOME
fi
cd $SONAR_HOME
wget https://s3-eu-west-1.amazonaws.com/tuiuk/source/sonarqube/sonarqube-5.4.zip
unzip sonarqube-5.4.zip
echo "Modifying Sonar config file"
cd sonarqube-5.4/conf
perl -p -i -e 's/#sonar.jdbc.username=/sonar.jdbc.username=sonar/g' sonar.properties
perl -p -i -e 's/#sonar.jdbc.password=/sonar.jdbc.password=sonar/g' sonar.properties
perl -p -i -e 's/#sonar.jdbc.url=jdbc:mysql/sonar.jdbc.url=jdbc:mysql/g' sonar.properties
cd $SONAR_HOME
echo "downloading and copying plugins"
wget https://s3-eu-west-1.amazonaws.com/tuiuk/source/sonarqube/sonarqube5.4_plugins.zip
unzip sonarqube5.4_plugins.zip
cp plugins/* sonarqube-5.4/extensions/plugins/
cd sonarqube-5.4/bin/linux-x86-64
echo "Starting Sonar"
./sonar.sh start
Below is my playbook. I got stuck where I need to execute the perl commands. Could any of you help me proceed further?
- hosts: docker_test
  tasks:
    - name: Creates directory
      file: path=/tui/hybris/sonar state=directory mode=0777
      sudo: yes

    - name: Installing Sonar
      get_url:
        url: "https://s3-eu-west-1.amazonaws.com/tuiuk/source/sonarqube/sonarqube-5.4.zip"
        dest: "/tui/hybris/sonar/sonarqube-5.4.zip"
      register: get_solr

    - debug:
        msg: "solr was downloaded"
      when: get_solr|changed

    - name: Unzip SonarQube
      unarchive: src=/tui/hybris/sonar/sonarqube-5.4.zip dest=/tui/hybris/sonar copy=no
I bet you don't need Perl here: use the lineinfile module with the regexp option (if you need to modify a single line in the file) or the replace module (if you need to modify all occurrences); see the sketch below.
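As a minimal sketch, here are the three perl substitutions from the script expressed with the replace module (the sonar.properties path assumes sonarqube-5.4 was unpacked under /tui/hybris/sonar as in the script above):
# Each task uncomments a property and, where the script appends a value, fills it in
- name: Set sonar.jdbc.username
  replace:
    dest: /tui/hybris/sonar/sonarqube-5.4/conf/sonar.properties
    regexp: '#sonar\.jdbc\.username='
    replace: 'sonar.jdbc.username=sonar'
- name: Set sonar.jdbc.password
  replace:
    dest: /tui/hybris/sonar/sonarqube-5.4/conf/sonar.properties
    regexp: '#sonar\.jdbc\.password='
    replace: 'sonar.jdbc.password=sonar'
- name: Uncomment the MySQL sonar.jdbc.url
  replace:
    dest: /tui/hybris/sonar/sonarqube-5.4/conf/sonar.properties
    regexp: '#sonar\.jdbc\.url=jdbc:mysql'
    replace: 'sonar.jdbc.url=jdbc:mysql'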
Alternatively, just call perl with the command or shell module:
- name: Modifying Sonar config file
  shell: cd sonarqube-5.4/conf && perl -p -i -e ...
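For example, filling in the first substitution from the script (the chdir path is an assumption based on the script's SONAR_HOME):
- name: Modify Sonar config file
  shell: perl -p -i -e 's/#sonar.jdbc.username=/sonar.jdbc.username=sonar/g' sonar.properties
  args:
    chdir: /tui/hybris/sonar/sonarqube-5.4/conf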
I am working on making an RPM for a small program used within our enterprise. The %build section of the RPM process works; I'm having trouble with the %install section. I've referenced this article's response, and I believe I am properly referring to the target location with respect to %{_buildroot}.
The program I'm packaging is to be installed as a system service, so after the RPM is actually generated for this step, I'll have to add the next step to my installation process: including the script that is installed to the init.d location and running that install. One step at a time, though.
The build errors are as follows (omitting everything but %install):
Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.eUDaCK
+ umask 022
+ cd /home/packager/rpmbuild/BUILD
+ '[' /home/packager/rpmbuild/BUILDROOT/o2arbitord-1.0-1.el6.x86_64 '!=' / ']'
+ rm -rf /home/packager/rpmbuild/BUILDROOT/o2arbitord-1.0-1.el6.x86_64
++ dirname /home/packager/rpmbuild/BUILDROOT/o2arbitord-1.0-1.el6.x86_64
+ mkdir -p /home/packager/rpmbuild/BUILDROOT
+ mkdir /home/packager/rpmbuild/BUILDROOT/o2arbitord-1.0-1.el6.x86_64
+ cd o2arbitord-1.0
+ LANG=C
+ export LANG
+ unset DISPLAY
+ install -m 555 /home/packager/rpmbuild/BUILD/o2arbitord-1.0/o2arbitord /home/packager/rpmbuild/BUILDROOT/o2arbitord-1.0-1.el6.x86_64/usr/sbin
install: cannot create regular file `/home/packager/rpmbuild/BUILDROOT/o2arbitord-1.0-1.el6.x86_64/usr/sbin': No such file or directory
error: Bad exit status from /var/tmp/rpm-tmp.eUDaCK (%install)
Now, my rpmbuild directory does not have the directory /home/packager/rpmbuild/BUILDROOT/o2arbitord-1.0-1.el6.x86_64/usr/sbin. While I know that's part of the problem, the rpmbuild process isn't making the directory /home/packager/rpmbuild/BUILDROOT/o2arbitord-1.0-1.el6.x86_64 either. What I don't understand is: why? Looking at the script output above, you can clearly see the line mkdir /home/packager/rpmbuild/BUILDROOT/o2arbitord-1.0-1.el6.x86_64. So why isn't the directory made?
How does the line BuildRoot: %(mktemp -ud %{_tmppath}/%{name}-%{version}-%{release}-XXXXXX) differ from whatever the definition of %{_buildroot} is? I thought that was the definition, but it appears to be something different.
For reference, here is my spec file:
Name: o2arbitord
Version: 1.0
Release: 1%{?dist}
Summary: a daemon
Group: Applications/System
License: GPL
URL: http://My.site
Source0: %{name}-%{version}.tar.gz
BuildRoot: %(mktemp -ud %{_tmppath}/%{name}-%{version}-%{release}-XXXXXX)
BuildArch: x86_64
BuildRequires: libusb1-devel
#Requires:
%description
%prep
%setup -q
%build
make -f o2arbitord.mk
%install
install -m 555 %{_builddir}/%{name}-%{version}/%{name} %{buildroot}%{_sbindir}
%clean
rm -rf %{buildroot}
%files
%defattr(-,root,root,-)
/usr/sbin/o2arbitord
%changelog
You are attempting to install a file into a directory that doesn't exist (yet).
RPM only creates the %{buildroot} for you automatically. Anything under that you need to create yourself.
So when you run
install -m 555 %{_builddir}/%{name}-%{version}/%{name} %{buildroot}%{_sbindir}
where %{buildroot}%{_sbindir} expands to /home/packager/rpmbuild/BUILDROOT/o2arbitord-1.0-1.el6.x86_64/usr/sbin, RPM has only created /home/packager/rpmbuild/BUILDROOT/o2arbitord-1.0-1.el6.x86_64 for you so far.
You need to create the /usr/sbin part of that path and then copy the file into it.
You can do that with either
%{__mkdir_p} '%{buildroot}%{_sbindir}'
or
%{__install} -d '%{buildroot}%{_sbindir}'
Where
$ rpm -E '__mkdir_p = %{__mkdir_p}'
__mkdir_p = /bin/mkdir -p
$ rpm -E '__install = %{__install}'
__install = /usr/bin/install
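Putting that together, a minimal sketch of the corrected %install section, using the macros above (the rest of the spec stays the same):
%install
# Create the sbin directory under the build root before installing into it
%{__mkdir_p} '%{buildroot}%{_sbindir}'
%{__install} -m 555 %{_builddir}/%{name}-%{version}/%{name} '%{buildroot}%{_sbindir}/'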