I have two SAM applications, both of which have a set of Python Lambdas in common. I have a different template file for each application.
When I run sam deploy for the first one, it correctly deploys the Lambdas with their dependencies. For the second, it deploys only the application code, without the dependencies.
I can see all the dependencies present in the .aws-sam/build directory.
Using --debug doesn't give me any useful information.
How do I go about debugging this?
Should the zip files that it deploys be available somewhere on my local system, and if so where?
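For reference, one way I could presumably inspect what sam deploy actually ships is to package the build explicitly and pull the artifact back down from S3 — a rough sketch, where the bucket name is a placeholder and the object key comes from the packaged template:
sam build                                                        # stages code and dependencies under .aws-sam/build/<LogicalId>/
sam package --s3-bucket my-deploy-bucket --output-template-file packaged.yaml   # zips each build directory and uploads it
grep CodeUri packaged.yaml                                       # shows the s3:// keys that sam deploy will reference
aws s3 cp <CodeUri-value-from-packaged.yaml> fn.zip
unzip -l fn.zip                                                  # lists exactly what was shipped for that function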
We have a service mesh/Kubernetes setup that we work with via the terminal, showing all the different pods with their different namespaces. Inside each pod, you can console in and see the app.jar.
Recently, the boss/client asked how we can run the various SYSTEM INTEGRATION tests for any particular JAR from the service mesh/Kubernetes command line. Google says to use 'mvn clean install', 'javac', or 'java -jar junit-platform-console-standalone-1.7.2.jar --class-path target --select-class '. These all fail for various reasons (mvn is not present, javac is not present, the jar says the port is in use; of course the port is in use, the same aforementioned jar is using it).
When I look at a pod in GitLab (or IntelliJ) I see all the tests it has. But how can I run these SYSTEM INTEGRATION tests from the pod console? Ideally with a single command that runs all the tests; that would make things a lot easier.
edit:
lol at the heat in the comments. I clarified with the boss; she said that we want to run system integration tests from the service mesh, not unit tests. These pods are not isolated; some of them depend on each other.
Generally, the comment from the user jonrsharpe could serve as an answer to the question:
That makes no sense as a request - you run the unit tests on the source code, then build and deploy the container if they pass. They shouldn't even be included in what's in the deployed jar.
If you need to test an application, do so before deploying it. You should have a separate environment where you will test your application, and only use Kubernetes when the application is working properly. You can of course use some CI type solution. Look at this page - Running JUnit tests with GitLab CI for Kubernetes-hosted apps.
EDIT
If you are looking for a way to do integration testing with Kubernetes, there are a couple of docs you can read. It all depends on what specifically you want to test. Here are several possibilities:
Overcome Kubernetes Application Integration Testing Challenges with Telepresence
How we approached integration testing in Kubernetes, and why we stopped using Helm tests
Testing Kubernetes deployments within CI Pipelines
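That said, if you really do have to drive tests from inside a running pod, about the only generic option is the JUnit console launcher already mentioned in the question — a rough sketch, assuming a JRE is available in the container and that the test classes and their dependencies are actually present on the pod (which, as noted above, they normally are not); pod, namespace, and paths are placeholders:
kubectl cp junit-platform-console-standalone-1.7.2.jar my-namespace/my-pod:/tmp/junit-console.jar
kubectl exec -n my-namespace my-pod -- java -jar /tmp/junit-console.jar --class-path /app/app.jar --scan-class-path      # run everything it can find
kubectl exec -n my-namespace my-pod -- java -jar /tmp/junit-console.jar --class-path /app/app.jar --select-class com.example.MyIT   # or a single class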
I have been trying to deploy the Talend Agent as an app in PCF. I literally have no idea about Talend; as the PCF guy, it's just a Java jar file to me, which I got from the DATA team.
I am getting a "no buildpack supported" error. I also tried specifying the Java buildpack on the command line, but it failed again with an "incompatible buildpack" error.
Error: No container can run this application. Please ensure that you've pushed a valid JVM artifact or artifacts using the -p command line argument or path manifest entry. Information about valid JVM artifacts can be found at https://github.com/cloudfoundry/java-buildpack#additional-documentation.
Failed to compile droplet: Failed to run finalize script: exit status 1
I was expecting this to be deployed as an app which I can access.
Is there anyone who can help me with this?
The CF Java buildpack expects a Java jar file to have certain characteristics in order for it to know how to execute the code in the jar file. The most common characteristics are a self-executable Spring Boot app, an app containing a Main class, and an app containing Tomcat.
I don't know anything about the Talend Agent, but a typical Java agent jar file is not meant to be executed as a stand-alone app. An agent is meant to be installed in the JVM used to run an app, in order to instrument the JVM and/or the app. A typical agent jar file won't have any of the execution entry points recognized by the CF Java buildpack, and therefore the buildpack will reject it with an error message similar to the one you show.
The CF Java buildpack does understand how to install several specific agents (listed under Standard Frameworks in the buildpack docs) into the JVM when an app is deployed. The Talend Agent is not currently in this list. If it is in fact a typical Java agent jar file, you would have to modify the Java buildpack to add support for it.
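A quick way to check which kind of jar you were handed (the file name talend-agent.jar is a placeholder): an executable app declares a Main-Class (and, for Spring Boot, a Start-Class) in its manifest, while a plain Java agent declares Premain-Class or Agent-Class instead, which the buildpack can't launch on its own.
unzip -p talend-agent.jar META-INF/MANIFEST.MF | grep -E 'Main-Class|Start-Class'     # executable app entry points
unzip -p talend-agent.jar META-INF/MANIFEST.MF | grep -E 'Premain-Class|Agent-Class'  # java agent entry points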
Trying to build the optashift-employee-rostering project
I followed the instructions in the readme file to build the app from this repo, but it fails every time. When I try it locally on my Windows 10 machine with Docker and the "oc.exe" tool, it simply hangs, and oc fails to open even the OpenShift local console (the one on localhost).
I've created an account on OpenShift (Starter: US East (Virginia), for individual learning and experimenting).
When I do it in OpenShift Online, the build fails but doesn't tell me the reason.
Here are the logs:
http://pasted.co/47eed571
How can I deploy this app to OpenShift or to some other cloud (Google Cloud/Microsoft Azure, etc.)?
OpenShift Online (= free edition) has less than 1GB of RAM for the build pods, which isn't enough to build it (GWT compilation needs more). That leads to error code 137 during GWT compilation.
But OpenShift Online is enough to run it.
Workaround: Build the war locally with ./provision.sh deploy employee-rostering --binary
as explained here in the readme. I hope that GWT and OpenShift play along better in the future.
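Error 137 here is the build container being killed for running out of memory, so on a cluster where you control the quotas (not the Starter tier) you could instead raise the build pod's memory limit — a rough sketch, reusing the employee-rostering name from the workaround above, with the 2Gi limit as a guess:
oc patch bc/employee-rostering -p '{"spec":{"resources":{"limits":{"memory":"2Gi"}}}}'   # give the build pod more memory
oc start-build employee-rostering --follow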
Over the last few months I've become familiar with the AWS OpsWorks deployment process as it pertains to Node.js - deployment for Go seems to be another animal.
From what I've gathered, this is what I need to compile a successful Go deployment:
Install go on the EC2 box
Pull the private repository from GitHub
Pull in all dependencies
Compile the main package for the box's arch
Start the binary with a couple of flags that I use
Everything I have read seems to tout the ease of Go deployments because dependencies are included in the binary, but that seems to imply that you are compiling the application in your development environment and pushing it up to the cloud. This doesn't seem like a process that works well across a development team.
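For concreteness, the compile-locally-and-push model I'm describing boils down to something like this (package path, binary name, host, and flags are placeholders):
GOOS=linux GOARCH=amd64 go build -o myapp .                         # build a linux/amd64 binary on any dev machine
scp myapp ec2-user@my-ec2-host:/opt/myapp/                          # ship just the binary
ssh ec2-user@my-ec2-host '/opt/myapp/myapp -flag1=foo -flag2=bar'   # start it with the usual flags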
https://github.com/crowdmob/chef-golang-web-server-cookbook
I have been attempting to get the Chef Scripts from CrowdMob working, but to no avail. I continue to get errors that look like this:
[2014-08-01T16:08:22+00:00] WARN: Cookbook 'templates' is empty or entirely chefignored at /opt/aws/opsworks/current/merged-cookbooks/templates
What is the proper way to deal with dependencies during deployment?
Are there any established practices for deploying Go onto AWS with Chef?
Use a continuous integration service like CircleCI, Travis, or your own Jenkins setup.
On the continuous integration service:
Add a GitHub post-commit hook.
Test and build the binary.
Create the zip file as an artifact.
At this point you can create a new version on Elastic Beanstalk using the AWS command line and the zip file created from this build.
venv/bin/aws elasticbeanstalk create-application-version ...
Then just select which version to deploy from the EB dashboard.
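Spelled out a bit more, the same step with the AWS CLI might look like this (application, environment, bucket, and key names are placeholders):
aws s3 cp build-123.zip s3://my-artifacts-bucket/my-app/build-123.zip
aws elasticbeanstalk create-application-version --application-name my-app --version-label build-123 --source-bundle S3Bucket=my-artifacts-bucket,S3Key=my-app/build-123.zip
aws elasticbeanstalk update-environment --environment-name my-app-prod --version-label build-123   # optionally deploy it right away instead of using the dashboard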
For simple services, using Chef is overkill IMHO. Docker offers a simpler workflow.
Use the Docker container option, then use Elastic Beanstalk's command-line client to initialize your environment in the project root directory; after that you can simply do a 'git aws.push' from the same place.
With a correctly configured Dockerfile in your project pushed to EB, Elastic Beanstalk's Docker container app will pull the correct image with golang installed, do a go get on your project's dependencies, and then compile and run your app. It sounds more complicated than it is; it's actually very easy.
Below is a link to a video walkthrough I did for running a simple golang webapp on Elastic Beanstalk. The method for uploading the project does not use git. Instead, I zip it up and upload it, but the git method is recommended (and I do it) for automating deployment.
YouTube: How to run a go web app on Amazon's Elastic Beanstalk
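For reference, 'git aws.push' comes from the older EB command-line tools; with the current EB CLI the same flow looks roughly like this (the environment name is a placeholder):
cd my-project          # the directory containing your Dockerfile
eb init                # choose the Docker platform when prompted
eb create my-go-env    # first deployment; for later commits: eb deploy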
I also had some problems setting up a good build process with Elastic Beanstalk and Go. I didn't want to use Docker, and everyone seems to be going in that direction, so you can take a look at this project: https://github.com/battle-arena/heimdall
There you will find a custom setup using a Buildfile and a Procfile, and I rely on a CI system to build the release package.
Basically I do the following:
Hook the commits to a CI system
On the CI system, run the tests and, if everything passes, the install.sh script
The install.sh script will create a build folder and a structure that will be sent to Elastic Beanstalk with the aws-cli tool
After it is sent to EB, the Buildfile will run build.sh, which basically extracts the compressed package into the proper structure and runs go get ./... and go build
The Procfile will run the generated binary
I think the result is pretty good, and you can use it with any CI tool.
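For anyone who hasn't seen them, on Elastic Beanstalk both files are just one-liners of the form name: command — roughly like this, with the script and binary names as assumptions based on the description above:
Buildfile:
make: ./build.sh
Procfile:
web: bin/application
EB runs the Buildfile entry on the instance at deploy time and uses the Procfile entry to start the app.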
How do you handle the deployment of a LightSwitch application into a production environment?
i.e. the LS application has been developed, but it now needs to be installed first into Test, and then into Live.
We don't want to use the "manual" approach, i.e. use the Visual Studio Build / Publish option, rather we want to automate the deployment.
My feeling is that deployment is one of the real weak points of LightSwitch. If you are using the very simple deployment model that is built into the product, and you're doing everything within a Windows domain, the publishing wizard can do everything. But if you're deviating from the model at all, LightSwitch will fight you. I'd really like to see an "advanced" deployment option that provided some configurability.
Here's how I solved the problem you're having with LightSwitch applications that are targeting web deployment:
At the beginning of the project, deploy once to each target environment using the publish wizard. This is the easiest way to get the database set up.
As new builds are deployed, use the publish wizard to create a deployment package in a standard location on the local development machine.
The deployment package is just a zip file, so you can open it and drill down to where the actual binary release is. I use a PowerShell script to copy the binary files out of the deployment package and into a local SVN working directory. Note that you must not copy the web.config file during this step.
Check the unpacked binary files into SVN and use SVN to manage the deployment.
Manage schema changes with SQL scripts.