dotCloud: Pseudo-terminal will not be allocated because stdin is not a terminal

I recently tried to push one of the php examples on dotcloud and I got the error below. I am not sure how to get it to allocate the pseudo-terminal...
dotcloud push ramen ramen-on-dotcloud
# upload ramen-on-dotcloud ssh://dotcloud@uploader.dotcloud.com:443/ramen
# rsync
Pseudo-terminal will not be allocated because stdin is not a terminal.
building file list ... done
sent 141 bytes received 20 bytes 35.78 bytes/sec
total size is 55 speedup is 0.34
22:34:16 ---> Deploy of "ramen" scheduled for revision rsync-1339454056336 at 2012-06-11 22:34:16
22:34:17 ---> Building the application...
22:34:17 [www] Build started for revision rsync-1339454056336 (clean build)
22:34:18 [www] I am snapshotsworker_00/bob-2, and I will be your builder today.
22:34:21 [www] Build completed successfully. Compiled image size is 427KB
22:34:21 ---> Application build is done
22:34:21 ---> Initializing new services... (This may take a few minutes)
22:34:21 ---> Using default scaling for service www (1 instance(s)).
22:34:21 ---> No new services found
22:34:21 ---> All services have been initialized. Deploying code...
22:34:21 [www.0] Deploying build revision rsync-1339454056336...
22:34:25 [www.0] Running postinstall script...
22:34:27 [www.0] Launching...
22:34:28 [www.0] Waiting for the instance to become responsive...
22:34:28 [www.0] Re-routing traffic to the new build...
22:34:29 [www.0] Successfully deployed build revision rsync-1339454056336
22:34:29 ---> Deploy finished
22:34:29 ---> Application fully deployed
Deployment finished. Your application is available at the following URLs
www: http://ramen-l.dotcloud.com/

The pseudo-terminal message is purely informative; it does not indicate an error in this case.
Note that at the bottom it indicates
22:34:29 ---> Deploy finished
22:34:29 ---> Application fully deployed
Deployment finished. Your application is available at the following URLs
www: http://ramen-l.dotcloud.com/
so everything went fine.
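For context, the warning comes from ssh itself rather than from dotcloud: ssh prints it whenever it is asked to allocate a tty while its stdin is a pipe instead of a terminal, which is presumably what happens when the CLI drives the rsync upload over ssh. A minimal sketch that reproduces the same message against any reachable host (the user and hostname are placeholders):

echo hostname | ssh -t user@example.com
# Pseudo-terminal will not be allocated because stdin is not a terminal.

The command still runs; ssh simply notes that no interactive terminal was set up.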
/A

Related

AWS Elastic Beanstalk failed to install psycopg2 using requirements.txt Git Pip

I am trying to deploy an app using Elastic Beanstalk with Python 3.8. I am using the following requirements.txt:
click==8.0.1
Flask==1.1.2
Flask-SQLAlchemy==2.5.1
greenlet==1.1.0
itsdangerous==2.0.1
Jinja2==3.0.1
MarkupSafe==2.0.1
marshmallow==3.12.1
marshmallow-sqlalchemy==0.25.0
SQLAlchemy==1.4.15
Werkzeug==2.0.1
celery[redis]
psycopg2==2.9.3
Flask-JWT-Extended==4.3.1
Flask-RESTful==0.3.9
python-decouple==3.6
When I run the command eb create, I get the following error
2022-04-05 22:03:00 INFO Created security group named: sg-00b14485064e5e8ca
2022-04-05 22:03:16 INFO Created security group named: awseb-e-ekd3bw2bvf-stack-AWSEBSecurityGroup-1O3NAVBIRRK30
2022-04-05 22:03:31 INFO Created Auto Scaling launch configuration named: awseb-e-ekd3bw2bvf-stack-AWSEBAutoScalingLaunchConfiguration-HKjIVsa84E3U
2022-04-05 22:04:49 INFO Created Auto Scaling group named: awseb-e-ekd3bw2bvf-stack-AWSEBAutoScalingGroup-5FQOAWMGCR3W
2022-04-05 22:04:49 INFO Waiting for EC2 instances to launch. This may take a few minutes.
2022-04-05 22:04:49 INFO Created Auto Scaling group policy named: arn:aws:autoscaling:us-east-1:208357543212:scalingPolicy:ecfbbff0-4151-492f-a474-ba01535ad348:autoScalingGroupName/awseb-e-ekd3bw2bvf-stack-AWSEBAutoScalingGroup-5FQOAWMGCR3W:policyName/awseb-e-ekd3bw2bvf-stack-AWSEBAutoScalingScaleDownPolicy-CI2UIP6X023P
2022-04-05 22:04:49 INFO Created Auto Scaling group policy named: arn:aws:autoscaling:us-east-1:208357543212:scalingPolicy:d534189a-45e3-48f1-a206-720f202b4469:autoScalingGroupName/awseb-e-ekd3bw2bvf-stack-AWSEBAutoScalingGroup-5FQOAWMGCR3W:policyName/awseb-e-ekd3bw2bvf-stack-AWSEBAutoScalingScaleUpPolicy-1F0WVTUXXPFKF
2022-04-05 22:05:04 INFO Created CloudWatch alarm named: awseb-e-ekd3bw2bvf-stack-AWSEBCloudwatchAlarmLow-W8URMJEYBO3C
2022-04-05 22:05:04 INFO Created CloudWatch alarm named: awseb-e-ekd3bw2bvf-stack-AWSEBCloudwatchAlarmHigh-13J8QHI51MEBM
2022-04-05 22:06:09 INFO Created load balancer named: arn:aws:elasticloadbalancing:us-east-1:208357543212:loadbalancer/app/awseb-AWSEB-IXOR2Z0K0OJV/1fba4c6ff6122c55
2022-04-05 22:06:24 INFO Created Load Balancer listener named: arn:aws:elasticloadbalancing:us-east-1:208357543212:listener/app/awseb-AWSEB-IXOR2Z0K0OJV/1fba4c6ff6122c55/734b0cf960b6b8c4
2022-04-05 22:06:42 ERROR Instance deployment failed to install application dependencies. The deployment failed.
2022-04-05 22:06:42 ERROR Instance deployment failed. For details, see 'eb-engine.log'.
2022-04-05 22:06:44 ERROR [Instance: i-0368a7ba2157241f4] Command failed on instance. Return code: 1 Output: Engine execution has encountered an error..
2022-04-05 22:06:45 INFO Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
2022-04-05 22:07:48 ERROR Create environment operation is complete, but with errors. For more information, see troubleshooting documentation.
I looked at the corresponding logs and found the following error:
Collecting Werkzeug==2.0.1
Downloading Werkzeug-2.0.1-py3-none-any.whl (288 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 288.2/288.2 KB 35.6 MB/s eta 0:00:00
Collecting celery[redis]
Downloading celery-5.2.6-py3-none-any.whl (405 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 405.6/405.6 KB 54.7 MB/s eta 0:00:00
Collecting psycopg2==2.9.3
Downloading psycopg2-2.9.3.tar.gz (380 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 380.6/380.6 KB 52.2 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'error'
2022/04/05 22:06:42.952376 [INFO] error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [23 lines of output]
running egg_info
creating /tmp/pip-pip-egg-info-v0aygozt/psycopg2.egg-info
writing /tmp/pip-pip-egg-info-v0aygozt/psycopg2.egg-info/PKG-INFO
writing dependency_links to /tmp/pip-pip-egg-info-v0aygozt/psycopg2.egg-info/dependency_links.txt
writing top-level names to /tmp/pip-pip-egg-info-v0aygozt/psycopg2.egg-info/top_level.txt
writing manifest file '/tmp/pip-pip-egg-info-v0aygozt/psycopg2.egg-info/SOURCES.txt'
Error: pg_config executable not found.
pg_config is required to build psycopg2 from source. Please add the directory
containing pg_config to the $PATH or specify the full executable path with the
option:
python setup.py build_ext --pg-config /path/to/pg_config build ...
or with the pg_config option in 'setup.cfg'.
If you prefer to avoid building psycopg2 from source, please install the PyPI
'psycopg2-binary' package instead.
For further information please check the 'doc/src/install.rst' file (also at
<https://www.psycopg.org/docs/install.html>).
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
I am not very familiar with the requirements of AWS, but I can run the app locally without any problem. I just wonder what the right configuration for the requirements.txt file would be in order to avoid this error.
Thanks in advance.
You have to install postgresql-devel before you can use psycopg2. You can add the installation instructions to your .ebextensions:
packages:
  yum:
    postgresql-devel: []
or
commands:
  command1:
    command: yum install -y postgresql-devel
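For reference, a minimal sketch of where the first option would live in the project; the file name 01_packages.config is only an assumed convention (Elastic Beanstalk reads any .config file under .ebextensions):

# .ebextensions/01_packages.config
packages:
  yum:
    postgresql-devel: []

The exact yum package name can vary with the platform's Amazon Linux version, so if the install step still fails it is worth verifying the name first (for example with yum search postgresql).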
I was able to solve the error. I had to replace psycopg2 with psycopg2-binary, as suggested by the AWS logs:
If you prefer to avoid building psycopg2 from source, please install the PyPI
'psycopg2-binary' package instead.
This issue seems to come down to the particular configuration of the libraries and the specific Linux machines used by AWS.
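Concretely, the only change to requirements.txt is the package name; the same version number is assumed to be published for psycopg2-binary (it is for 2.9.3):

# before
psycopg2==2.9.3
# after
psycopg2-binary==2.9.3

Everything else in the file stays as it is.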

Service Fabric local machine deployment fails with unclear error

When trying to debug Service Fabric locally it fails during deployment:
1>------ Build started: Project: Project.TestServer.Contracts, Configuration: Debug Any CPU ------
1>Project.TestServer.Contracts -> D:\Projects\Project.Test\Project.TestServer.Contracts\bin\Debug\netstandard2.1\Project.TestServer.Contracts.dll
2>------ Build started: Project: Project.TestServer, Configuration: Debug Any CPU ------
2>Waiting for output folder cleanup...
2>Output folder cleanup has been completed.
2>Project.TestServer -> D:\Projects\Project.Test\Project.TestServer\bin\Debug\netcoreapp3.1\win7-x64\Project.TestServer.dll
2>Project.TestServer -> D:\Projects\Project.Test\Project.TestServer\bin\Debug\netcoreapp3.1\win7-x64\Project.TestServer.Views.dll
3>------ Build started: Project: Project.TestServer.ServiceFabric, Configuration: Debug x64 ------
4>------ Deploy started: Project: Project.TestServer.ServiceFabric, Configuration: Debug x64 ------
4>C:\ProgramData\Microsoft\Crypto\Keys\33c99d3358d005d142e356b6d*******_8f15e82c-1deb-4d62-b94a-196c3a******
========== Build: 3 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========
========== Deploy: 0 succeeded, 1 failed, 0 skipped ==========
What could this line mean?
C:\ProgramData\Microsoft\Crypto\Keys\33c99d3358d005d142e356b6d*******_8f15e82c-1deb-4d62-b94a-196c3a******
I had this same issue for the past day or so, and I was able to resolve it by searching my OS (C:\) drive for the first part of the key name ({first part}_{the rest}).
I found a copy of the original key in "C:\Users\youruser\AppData\Roaming\Microsoft\Crypto\Keys" and copied it over to "C:\ProgramData\Microsoft\Crypto\Keys".
After doing this, the app was able to run and deploy again on my local machine.
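In command form the workaround amounts to something like the following, run from an elevated prompt (the key file name is a placeholder for the {first part}_{the rest} name shown in the deploy output):

copy "C:\Users\youruser\AppData\Roaming\Microsoft\Crypto\Keys\<keyfile>" "C:\ProgramData\Microsoft\Crypto\Keys\"

The machine key store under ProgramData has restrictive permissions, so an ordinary prompt may get an access-denied error.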
This solution by ravipal worked for me:
The issue is that the ASP.NET development certificate being imported to Local Computer was incomplete. We are working on addressing this issue in the VS Tooling. Meanwhile, please use the following workaround which is needed only once per machine.
1. Export the ASP.NET development certificate:
dotnet dev-certs https -ep "%TEMP%\aspcert.pfx" -p <password> (choose any password)
2. Launch the local machine certificate manager.
3. Import the certificate that was exported in step 1 (%TEMP%\aspcert.pfx) into both 'Personal' and 'Trusted Root Certification Authorities' of Local Computer. Use all the default options while importing the certificate.
Now the deployment of the SF application will work.
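If you prefer the command line over the certificate manager UI, the same import can be sketched in PowerShell run as Administrator (the password must match the one chosen during the export):

$pw = ConvertTo-SecureString -String "<password>" -AsPlainText -Force
# 'Personal' store of Local Computer
Import-PfxCertificate -FilePath "$env:TEMP\aspcert.pfx" -CertStoreLocation Cert:\LocalMachine\My -Password $pw
# 'Trusted Root Certification Authorities' store of Local Computer
Import-PfxCertificate -FilePath "$env:TEMP\aspcert.pfx" -CertStoreLocation Cert:\LocalMachine\Root -Password $pw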

GitLab Runner auto CI stuck at downloading

Until now I have worked a lot with GitHub/Bitbucket and Jenkins/Bamboo. Right now I'm trying to set up a GitLab CE server with a private Kubernetes cluster.
I want to run a hello-world project in Java with GitLab's Auto DevOps in Kubernetes; this is the repo I'm using:
https://github.com/dstar55/docker-hello-world-spring-boot
Everything works fine until the runner gets created in Kubernetes and downloads the image, but then it gets stuck downloading Maven resources.
Running on runner-h6cwaztm-project-8-concurrent-0jvd9f via runner-gitlab-runner-6dcf7dd458-jl69h...
Fetching changes with git depth set to 50...
00:02
Initialized empty Git repository in /builds/.../hello-world-spring/.git/
Created fresh repository.
From https://.../hello-world-spring
* [new ref] refs/pipelines/14 -> refs/pipelines/14
* [new branch] master -> origin/master
Checking out ad24ac6b as master...
Skipping Git submodules setup
$ if [[ -z "$CI_COMMIT_TAG" ]]; then # collapsed multi-line command
$ /build/build.sh
Logging to GitLab Container Registry with CI credentials...
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Building Dockerfile-based application...
Step 1/10 : FROM maven:3.5.2-jdk-8-alpine AS maven_build
3.5.2-jdk-8-alpine: Pulling from library/maven
22bc7fb81913: Pull complete
Digest: sha256:7cebda60f8a541e1bf2330306d22f9786f989187f4ec96539d398a0d4dbfdadb
Status: Downloaded newer image for maven:3.5.2-jdk-8-alpine
---> 293423a981a7
Step 2/10 : COPY pom.xml /tmp/
---> c0e609a509a8
Step 3/10 : COPY src /tmp/src/
---> e735a08f2b39
Step 4/10 : WORKDIR /tmp/
---> Running in 90620c0ca3ad
Removing intermediate container 90620c0ca3ad
---> a5d9fdc62aa9
Step 5/10 : RUN mvn package
---> Running in dc90f43fc83b
[INFO] Scanning for projects...
Downloading from central: https://repo.maven.apache.org/maven2/org/springframework/boot/spring-boot-starter-parent/2.2.1.RELEASE/spring-boot-starter-parent-2.2.1.RELEASE.pom
It never throws an error (until it times out) and it never goes past this point.
The Kubernetes cluster has 4 nodes, 1 master and 3 workers, and uses flannel and MetalLB.
Edit:
I added a curl command instead of mvn package, and it seems the download speed is 0. How is that possible?
Step 5/11 : RUN curl https://repo.maven.apache.org/maven2/org/springframework/boot/spring-boot-starter-parent/2.2.1.RELEASE/spring-boot-starter-parent-2.2.1.RELEASE.pom --output test.pom
---> Running in db2bc24c6a4f
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:05:00 --:--:-- 0
curl: (28) Operation timed out after 300689 milliseconds with 0 out of 0 bytes received
The command '/bin/sh -c curl https://repo.maven.apache.org/maven2/org/springframework/boot/spring-boot-starter-parent/2.2.1.RELEASE/spring-boot-starter-parent-2.2.1.RELEASE.pom --output test.pom' returned a non-zero code: 28
ERROR: Job failed: command terminated with exit code 1
Judging by where the CI hangs, your pipeline is stuck at mvn package:
Step 5/10 : RUN mvn package
---> Running in dc90f43fc83b
[INFO] Scanning for projects...
Downloading from central: https://repo.maven.apache.org/maven2/org/springframework/boot/spring-boot-starter-parent/2.2.1.RELEASE/spring-boot-starter-parent-2.2.1.RELEASE.pom
So, you can try to restart Artifactory (if you have a repository manager such as Artifactory sitting in front of Maven Central).
Also, you can debug the Maven build with mvn clean package -X -e.
See this answer: java - Maven hanging indefinitely while checking for updates - Stack Overflow
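One quick way to check whether pods in the cluster have any outbound connectivity at all is to run a throwaway curl pod (a diagnostic sketch, not part of the linked answer; curlimages/curl is just a convenient public image):

kubectl run net-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -sv --max-time 30 https://repo.maven.apache.org/maven2/

If this also stalls at 0 bytes, the problem is cluster egress (CNI, NAT or DNS) rather than Maven itself, which would match the curl timeout shown in the edit above.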

Cloud Code for Visual Studio Code errors on Cloud Code: Deploy

I've been trying to set up Cloud Code with VS Code and I've been running into problems when starting the deploy process with Cloud Code: Deploy.
I've tried deploying the samples, python-hello-world-1 as well as go-hello-world-1, to my Kubernetes cluster on GKE, but I always end up getting errors when the deploy process starts downloading packages:
Go Output
Running: skaffold run --enable-rpc -v info --rpc-http-port 49869 --filename skaffold.yaml --default-repo gcr.io/abx-lernende
starting gRPC server on port 50051
starting gRPC HTTP server on port 49869
Using kubectl context: gke_abx-lernende_europe-west4-a_joshu-test-cluster
Generating tags...
- go-hello-world -> gcr.io/abx-lernende/go-hello-world:latest
Checking cache...
- go-hello-world: Not found. Building
Building [go-hello-world]...
Sending build context to Docker daemon 57.86kB
Step 1/8 : FROM golang:1.13
---> 6586e3d10e96
Step 2/8 : RUN go get -u -v github.com/go-delve/delve/cmd/dlv
---> Running in b75ce8e5dae9
github.com/go-delve/delve (download)
# cd .; git clone -- https://github.com/go-delve/delve /go/src/github.com/go-delve/delve
Cloning into '/go/src/github.com/go-delve/delve'...
fatal: unable to access 'https://github.com/go-delve/delve/': Failed to connect to github.com port 443: Connection refused
package github.com/go-delve/delve/cmd/dlv: exit status 128
failed to build: build failed: building [go-hello-world]: build artifact: unable to stream build output: The command '/bin/sh -c go get -u -v github.com/go-delve/delve/cmd/dlv' returned a non-zero code: 1
Exited with code 1.
Python Output
Running: skaffold run --enable-rpc -v info --rpc-http-port 50185 --filename
skaffold.yaml --default-repo gcr.io/abx-lernende
starting gRPC server on port 50051
starting gRPC HTTP server on port 50185
Skaffold &{Version:v1.3.1 ConfigVersion:skaffold/v2alpha3 GitVersion: GitCommit:6ba887a42438d1da578a005cf550e618fee6dfb8 GitTreeState:clean BuildDate:2020-01-31T19:55:18Z GoVersion:go1.13.4 Compiler:gc Platform:windows/amd64}
Using kubectl context: gke_abx-lernende_europe-west4-a_joshu-test-cluster
Generating tags...
- python-hello-world -> Tags generated in 0s
gcr.io/abx-lernende/python-hello-world:latest
Checking cache...
- python-hello-world: Cache check complete in 6.0001ms
Not found. Building
Building [python-hello-world]...
Sending build context to Docker daemon 4.608kB
Step 1/7 : FROM python:3.8
---> efdecc2e377a
Step 2/7 : WORKDIR /app
---> Using cache
---> a131b81cad66
Step 3/7 : COPY requirements.txt .
---> Using cache
---> 4625ef1862bd
Step 4/7 : RUN pip install --trusted-host pypi.python.org -r requirements.txt
---> Running in 4da23a158ae3
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7f17ba9c9d60>, 'Connection to pypi.org timed out. (connect timeout=15)')': /simple/flask/
I'm assuming this is due to me being behind a corporate proxy. As countermeasures, I have explicitly configured VS Code, Git, pip, Go, and the Google Cloud SDK to use said proxy. On top of that, I set the Windows environment variables for the proxy, sadly without success.
Thanks!
You can configure docker to pass through proxy information into the containers by adding something like the following to your ~/.docker/config.json:
{
  "proxies": {
    "default": {
      "httpProxy": "http://192.168.1.12:3128",
      "httpsProxy": "http://192.168.1.12:3128"
    }
  }
}
Docker will set the HTTP_PROXY/HTTPS_PROXY environment variables within the container, which are picked up by many tools.
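If editing ~/.docker/config.json is not an option, the same values can be passed for a single docker build through the predefined proxy build arguments (the address below is a placeholder for your corporate proxy):

docker build \
  --build-arg HTTP_PROXY=http://192.168.1.12:3128 \
  --build-arg HTTPS_PROXY=http://192.168.1.12:3128 \
  -t go-hello-world .

Since Cloud Code drives the build through skaffold, the config.json route is usually the less intrusive of the two.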

Application keeps crashing when deploying to Bluemix

We are trying to deploy an application to Bluemix using the CF command cf push -f manifest.yml; below is the attached manifest.yml file:
applications:
- name: service_name_v1.0
  memory: 1GB
  buildpack: liberty-for-java
  instances: 1
  path: /Users/admin/Apps/wlp-cb/usr/servers/defaultServer/defaultServer.zip
  domains:
  - my.bluemix.org
  hosts:
  - my-service-space
  timeout: 180
The application keeps crashing, with the following log output:
DEA/2
Instance (index 0) failed to start accepting connections
May 4, 2017 10:10:55 AM
API/0
App instance exited with guid ae937fbe-d88f-48d5-b985-12b3b71b3b8c payload: {"cc_partition"=>"default", "droplet"=>"ae937fbe-d88f-48d5-b985-12b3b71b3b8c", "version"=>"94e28b95-27d3-4554-b99b-11d453f40e7f", "instance"=>"9b723df2d462441caa352e3337b4e230", "index"=>0, "reason"=>"CRASHED", "exit_status"=>0, "exit_description"=>"failed to accept connections within health check timeout", "crash_timestamp"=>1493917855}
May 4, 2017 10:10:55 AM
API/1
App instance exited with guid ae937fbe-d88f-48d5-b985-12b3b71b3b8c payload: {"cc_partition"=>"default", "droplet"=>"ae937fbe-d88f-48d5-b985-12b3b71b3b8c", "version"=>"94e28b95-27d3-4554-b99b-11d453f40e7f", "instance"=>"9b723df2d462441caa352e3337b4e230", "index"=>0, "reason"=>"CRASHED", "exit_status"=>0, "exit_description"=>"failed to accept connections within health check timeout", "crash_timestamp"=>1493917855}
This usually indicates that your server isn't listening on the right port.
I recommend deploying the Liberty Buildpack here and subsequently downloading the source code. This should give you a working Liberty app with the correct manifest to push code to Bluemix.
You list the path to the target artifact as a zip file, and the buildpack probably does not recognize that artifact type. Try changing it to an executable jar, a war, or an ear file.
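A sketch of that suggestion, keeping the rest of the manifest unchanged (the war file name and its location are hypothetical; point the path at whatever artifact your build actually produces):

applications:
- name: service_name_v1.0
  memory: 1GB
  buildpack: liberty-for-java
  instances: 1
  path: /Users/admin/Apps/wlp-cb/usr/servers/defaultServer/apps/myapp.war
  domains:
  - my.bluemix.org
  hosts:
  - my-service-space
  timeout: 180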