We have a service mesh/Kubernetes setup that we work with via the terminal, showing all the pods in their different namespaces. You can console into each pod and see the app.jar inside.
Recently, the boss/client asked how we can run the various SYSTEM INTEGRATION tests for any particular JAR from the service mesh/Kubernetes command line. Google suggests 'mvn clean install', 'javac', or 'java -jar junit-platform-console-standalone-1.7.2.jar --class-path target --select-class '. These all fail for various reasons: mvn is not present, javac is not present, and the JAR complains that the port is in use (of course the port is in use; the same aforementioned JAR is using it).
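For reference, the kind of invocation being attempted from inside a pod looks roughly like this (pod name, namespace, and test class are placeholders, not from the original post):

# open a shell inside the pod
kubectl exec -it my-app-pod-abc123 -n my-namespace -- sh

# inside the pod: try to run the standalone JUnit console launcher against the packaged code
java -jar junit-platform-console-standalone-1.7.2.jar \
    --class-path app.jar \
    --select-class com.example.SomeIntegrationTest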
When I look at a pod's project in GitLab (or IntelliJ) I see all the tests it has. But how can I run these SYSTEM INTEGRATION tests from the pod console? Ideally there would be a single command to run all the tests; that would make things a lot easier.
edit:
lol at the heat in the comments. I clarified with the boss; she said that we want to run system integration tests from the service mesh, not unit tests. These pods are not isolated; some of them depend on each other.
Generally, the comment from user jonrsharpe could serve as an answer to the question:
That makes no sense as a request - you run the unit tests on the source code, then build and deploy the container if they pass. They shouldn't even be included in what's in the deployed jar.
If you need to test an application, do so before deploying it. You should have a separate environment where you test your application, and only use Kubernetes when the application is working properly. You can of course use some CI-type solution. Look at this page: Running JUnit tests with GitLab CI for Kubernetes-hosted apps.
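A minimal sketch of the shell steps such a CI job might run before anything reaches the cluster (registry, image, and deployment names are assumptions):

# run the tests against the source tree; fail the pipeline here if they fail
mvn clean verify

# only when the tests pass: build, publish, and roll out the image
docker build -t registry.example.com/my-app:${CI_COMMIT_SHORT_SHA} .
docker push registry.example.com/my-app:${CI_COMMIT_SHORT_SHA}
kubectl set image deployment/my-app my-app=registry.example.com/my-app:${CI_COMMIT_SHORT_SHA}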
EDIT
If you are looking for a way to do integration testing with Kubernetes, there are a few docs worth reading. It all depends on what specifically you want to test. Here are several possibilities:
Overcome Kubernetes Application Integration Testing Challenges with Telepresence
How we approached integration testing in Kubernetes, and why we stopped using Helm tests
Testing Kubernetes deployments within CI Pipelines
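As a rough illustration of the Telepresence approach from the first link (service name, namespace, and the Maven property are assumptions, not part of the article):

# connect the local machine to the cluster's network (Telepresence v2)
telepresence connect

# run the integration tests locally; in-cluster DNS names such as my-service.my-namespace now resolve
mvn verify -Dtarget.host=my-service.my-namespace

# optionally intercept traffic for a service and route it to a process on this machine
telepresence intercept my-service --port 8080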
Related
I have two SAM applications, both of which have a set of Python Lambdas in common. I have a different template file for each application.
When I run sam deploy for the first one it correctly deploys the Lambdas with their dependencies. For the second it only deploys the application code.
I can see all the dependencies correctly there in the .aws-sam/build directory.
Using --debug doesn't give me any useful information.
How do I go about debugging this?
Should the zip files that it deploys be available somewhere on my local system, and if so where?
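One way to see exactly what gets packaged for each application is to build and package explicitly and then inspect the artifacts (bucket, template, and function names below are placeholders):

# build the second application from its own template
sam build --template template-app2.yaml

# check what actually landed in the build directory for a given function
ls -R .aws-sam/build/MySharedFunction/

# package explicitly so the zips are uploaded and referenced in packaged.yaml
sam package --template-file .aws-sam/build/template.yaml \
    --s3-bucket my-deploy-bucket \
    --output-template-file packaged.yaml

# each CodeUri in packaged.yaml points at an uploaded zip, which can be pulled back down and inspected
aws s3 cp s3://my-deploy-bucket/<key-from-CodeUri> app2.zip && unzip -l app2.zip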
Currently, I'm creating a project that incorporates the MEAN stack, Docker, and Travis CI. I'm using Travis CI to automate builds for unit testing, integration testing, etc. I'm using Docker to help create a test environment. I've already successfully created unit tests thanks to resources via Medium. However, I haven't found many resources on writing integration tests for a MEAN application. I want to create tests to see if I get expected values in the Angular application when it connects to the REST API endpoints from Express, and the Express application is connected to a MongoDB server. Does anyone have any resources or advice on how to write these tests, and to execute them in a Dockerized test environment?
Having done something similar myself, just a piece of advice.
Test the services independently, like e2e tests for the API server and the mail service for the frontend web app. If the Selenium tests pass against the web page/app, and the API endpoint is on the local machine, then everything looks to be working. There is nothing magic in Docker. Your local configs should reflect what you're trying to test; avoid overcomplicating things, and write the tests yourself.
Tools often take more time to learn than the actual thing you're trying to accomplish yourself. Document it adequately so the consumer of the container can replicate it with minimal effort.
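As a minimal sketch of that approach, assuming a docker-compose.test.yml that defines mongo and api services and an npm script for the API integration tests (all of those names are assumptions):

# bring up only the backing services the API needs
docker-compose -f docker-compose.test.yml up -d mongo api

# run the API integration tests against the locally exposed endpoint
API_URL=http://localhost:3000 npm run test:integration

# tear everything down again, including volumes
docker-compose -f docker-compose.test.yml down -v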
It's actually pretty hard, good luck.
My project at work used Urban Code Deploy (UCD) for its continuous deployment process. My code runs locally and passes all unit tests, but the build group says that my code is failing the FVT test being run by UCD. Is there any way to run this FVT test locally, or at least attempt to run it, so I can hopefully figure out what is failing?
Mike
UC Deploy isn't a testing tool, so the team that has set it up has it running some other testing tool at the end of the deployment (which is pretty normal).
So you'll need to ask them what testing tool they're using and go from there.
If you can see how the build group is deploying your code, you should be able to see what testing they are doing, and then be able to replicate that in your own environment. Often the code changes and changes in requirements will not be reflected in the FVT tests, and you need to deliver updated FVT test scripts in conjunction with your code changes.
Over the last few months I've become familiar with the AWS OpsWorks deployment process as it pertains to Node.js; deployment for Go seems to be another animal.
From what I've gathered, this is what I need for a successful Go deployment (sketched as commands after the list):
Install go on the EC2 box
Pull the private repository from GitHub
Pull in all dependencies
Compile the main package for the box's arch
Start the binary with a couple of flags that I use
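A rough sketch of those steps as shell commands (the package name, repository, paths, and flags are placeholders):

# 1. install Go on the EC2 box (or unpack an official tarball instead)
sudo yum install -y golang

# 2. pull the private repository (assumes a deploy key is already configured)
git clone git@github.com:myorg/myapp.git && cd myapp

# 3. pull in all dependencies
go get ./...

# 4. compile the main package for the box's architecture
GOOS=linux GOARCH=amd64 go build -o myapp .

# 5. start the binary with the flags the app expects
./myapp -port=8080 -env=production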
Everything I have read seems to tout the ease of Go deployments because dependencies are included in the binary, but that seems to imply you are compiling the application in your development environment and pushing that up to the cloud. This doesn't seem like a process that works well across a development team.
https://github.com/crowdmob/chef-golang-web-server-cookbook
I have been attempting to get the Chef Scripts from CrowdMob working, but to no avail. I continue to get errors that look like this:
[2014-08-01T16:08:22+00:00] WARN: Cookbook 'templates' is empty or entirely chefignored at /opt/aws/opsworks/current/merged-cookbooks/templates
What is the proper way to deal with dependencies during deployment?
Are there any established practices for deploying Go onto AWS with Chef?
Use a continuous integration service like CircleCI, Travis, or your own Jenkins setup.
On the continuous integration service:
Add a GitHub post-commit hook.
Test and build the binary.
Create the zip file as an artifact.
At this point you can create a new application version on Elastic Beanstalk using the AWS command line and the zip file created for this version.
venv/bin/aws elasticbeanstalk create-application-version ...
Then just select which version to deploy from the EB dashboard.
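Expanded a little, the CLI part of that flow might look like the following (bucket, application, environment, and version names are assumptions):

# build and zip the artifact on the CI box
GOOS=linux GOARCH=amd64 go build -o bin/application .
zip artifact.zip bin/application

# upload the bundle and register it as a new application version
aws s3 cp artifact.zip s3://my-deploy-bucket/artifact-v42.zip
aws elasticbeanstalk create-application-version \
    --application-name my-app \
    --version-label v42 \
    --source-bundle S3Bucket=my-deploy-bucket,S3Key=artifact-v42.zip

# optionally deploy it right away instead of picking it from the dashboard
aws elasticbeanstalk update-environment --environment-name my-app-prod --version-label v42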
For simple services, using Chef is overkill IMHO. Docker offers a simple workflow.
Use the Docker container option, use Elastic Beanstalk's command-line client to initialize your environment in the project root directory, and then you can simply do a 'git aws.push' from the same place.
With a correctly configured Dockerfile in your project and pushed to EB, Elastic Beanstalk's Docker container app will pull the correct image with golang installed, do a go get on your project's dependencies, and then compile and run your app. It sounds way more complicated than it is; it's actually very easy.
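The commands from the project root are roughly these (using the older eb CLI that ships git aws.push; the commit message is just an example):

# set up the Elastic Beanstalk application/environment for this project (interactive)
eb init

# commit the Dockerfile and source, then push the commit straight to Elastic Beanstalk
git add -A && git commit -m "Deploy Go web app"
git aws.push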
Below is a link to a video walkthrough I did for running a simple golang webapp on EBS. The method for uploading the project does not use git. Instead, I zip it up and upload it, but the git method is recommended (and I do it) for automating deployment.
YouTube: How to run a go web app on Amazon's Elastic Beanstalk
I also had some problems setting up a good build process with Elastic Beanstalk and Go. I didn't want to use Docker, and everyone seems to be going in that direction, so you can take a look at this project: https://github.com/battle-arena/heimdall
There you will find a custom setup using a Buildfile and a Procfile, and I rely on a CI system to build the release package.
Basically I do the following (a rough sketch of build.sh follows the list):
Hook the commits to a CI system
On the CI system I run the tests and, if all is good, install.sh
The install.sh will create a build folder and a structure that will be sent to Elastic Beanstalk with the aws-cli tool
After it is sent to EB, the Buildfile will run build.sh, which basically extracts the compressed package with the proper structure and then runs go get ./... and go build
The Procfile will run the generated binary
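A stripped-down sketch of what build.sh and the two config files might contain, based purely on the description above (file and binary names are assumptions):

#!/bin/sh
# build.sh - run on the Elastic Beanstalk instance via the Buildfile
set -e

# extract the compressed package that install.sh uploaded
tar -xzf package.tar.gz

# fetch dependencies and compile the binary that the Procfile will start
go get ./...
go build -o bin/application .

# the Buildfile would contain a line such as:  make: ./build.sh
# the Procfile would contain a line such as:   web: bin/application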
I think the result is pretty good, and you can use it with any CI tool.
We are currently working on a Salesforce.com custom APEX project that involves a lot of apex classes, triggers and Visualforce pages. We also have numerous applications from AppExchange that are part of the system.
We develop all the Apex classes, Visualforce pages, etc. in a test environment and then deploy them to the live environment using the Eclipse IDE. What happens is that every time we deploy changes to the live environment, all the test methods of all the classes (including those from AppExchange apps) seem to be executed, so deployment of a simple change can end up taking a couple of minutes.
Is there a way in Apex to "package" classes by namespace or something like that, so that when we try to deploy a change, only the test methods relevant to that package are executed? If something like that exists, our deployment could happen much faster.
Unfortunately no, there is no partial testing for deployment of Apex code; every change, no matter how minute or self-contained, triggers a full test run. This, among other things, enforces code metrics (minimum total code coverage, for instance).
IMHO, this is proving to be a two-sided coin when it comes to enforcing code reliability. When we started using Apex, all of our tests were very comprehensive, performing actual testing of the code with lots of asserts and checks. Then we started having very, very long deploy times, so now our tests serve one and only one function: satisfying minimum code coverage. Even with that simplification it takes almost 3 minutes to deploy anything, and we only use 20% of our Apex code allowance.
IMHO2, Apex is way too slow a coding platform to be enforcing this kind of testing. I can't even imagine how long the tests would run if we reached 50% of the allowance, let alone more.
This is possible but you'll need to learn about Apache Ant and have a look at the Force.com Migration Toolkit. You can then use a Build file to determine which files are deployed as well as which tests are run.
I'm busy writing a whitepaper that'll touch on this and other related development strategies... I'll post to my blog when it's done.
If we use the Apache Ant migration tool, we have many options for deployment, like deployCodeFailingTest, which will skip the test classes.
If you want to run only specific test classes, use something similar to this in your build.xml:
<target name="deployCode">
    <sf:deploy
        username="${sf.username}"
        password="${sf.password}"
        serverurl="${sf.serverurl}"
        deployroot="codepkg">
        <runTest>SampleDeployClass</runTest>
    </sf:deploy>
</target>
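Running just that target from the command line would then be something like this (assuming the credentials are set in build.properties and the migration tool jar is on Ant's path):

ant deployCode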
For a detailed reference, please use this link:
http://www.salesforce.com/us/developer/docs/daas/salesforce_migration_guide.pdf
I would recommend the following approach:
Git as the repository for all your SF code
Jenkins as the CI/CD tool to deploy your code
PMD as the static code analyser
sfdx as the deployment method within Jenkins
Refer to the Trailhead link: https://trailhead.salesforce.com/users/strailhead/trailmixes/architect-dev-lifecycle-and-deployment
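As one illustration, the sfdx step a Jenkins job might run could look like this (the connected app, key file, username, org alias, and source path are all assumptions):

# authenticate the CI user via the JWT flow, then deploy with only local tests running
sfdx force:auth:jwt:grant --clientid $CONNECTED_APP_ID --jwtkeyfile server.key \
    --username ci-user@example.com --setalias ciOrg
sfdx force:source:deploy --sourcepath force-app --targetusername ciOrg --testlevel RunLocalTests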