How to run Kubernetes e2e tests?

I have a Kubernetes cluster running with one master and two nodes, and I want to run the e2e tests against this cluster. How should I run them? I tried go run hack/e2e.go -v --test, but that command wants to create a cluster first and then run the tests, while I want to run them against my existing cluster. Any idea how I should go ahead, or what parameters I should pass to the e2e tests?
TIA.

If what you want to do is run the conformance tests and verify your cluster, you might also consider Heptio's tool sonobuoy, which was created specifically to run the non-destructive conformance tests against Kubernetes 1.7 (or later) in a consistent fashion. Lachlan Everson posted a 6-minute YouTube video showing how to use it; it's easy to follow and will get you up and running very quickly.
It's configuration driven, so you can easily turn the tests that interest you on or off, and it includes a plugin-driven "get more data about this cluster" setup if you find you want or need to dig deeper in specific areas.
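If you install the sonobuoy CLI, a typical conformance run against an existing cluster looks roughly like this (a sketch only; the flags have changed between sonobuoy versions, so check sonobuoy --help for yours):
# point sonobuoy at your existing cluster and start the conformance plugin
sonobuoy run --kubeconfig ~/.kube/config --mode=certified-conformance
# poll until the run reports complete
sonobuoy status
# download the results tarball and summarize it
results=$(sonobuoy retrieve)
sonobuoy results $results
# clean up the sonobuoy namespace when you are done
sonobuoy delete --wait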

Use the conformance test, described here:
https://github.com/kubernetes/community/blob/master/contributors/devel/e2e-tests.md#conformance-tests

You can find an updated link here: https://github.com/kubernetes/community/blob/master/contributors/devel/e2e-tests.md, or you can now use kubetest to run the e2e tests.
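For the kubetest route, a run against an existing cluster looks something like the sketch below (the exact flags are version-dependent, and v1.14.1 is only an example release, not anything required):
# install kubetest (this was the documented way at the time; newer setups use kubetest2)
go get -u k8s.io/test-infra/kubetest
# point it at the existing cluster
export KUBECONFIG=$HOME/.kube/config
# download test binaries matching your cluster version, then run only the conformance suite;
# the skeleton provider skips cluster provisioning/teardown
kubetest --provider=skeleton \
  --extract=v1.14.1 \
  --test \
  --test_args="--ginkgo.focus=\[Conformance\]" \
  --check-version-skew=false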
Update: The easiest way to run e2e tests is by using Heptio's scanner

I use this command:
docker run -v $HOME/.kube/config:/kubeconfig \
--env KUBECONFIG=/kubeconfig \
k8s.gcr.io/conformance-amd64:v1.14.1 \
/usr/local/bin/ginkgo \
--focus="\[Conformance\]" \
--skip="Alpha|\[(Disruptive|Feature:[^\]]+|Flaky)\]" \
--noColor=false \
--flakeAttempts=2 \
/usr/local/bin/e2e.test -- \
--repo-root=/kubernetes \
--provider="skeleton" \
--kubeconfig="/kubeconfig" \
--allowed-not-ready-nodes=1
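If you prefer not to run the suite in a container, roughly the same thing can be done with the e2e.test binary from the Kubernetes test tarball for your release (a sketch; the ginkgo-prefixed flag names reflect how the binary is usually invoked, so double-check them against your version):
# assuming e2e.test has been extracted from the kubernetes test tarball for your cluster version
./e2e.test --provider=skeleton \
  --kubeconfig=$HOME/.kube/config \
  --ginkgo.focus="\[Conformance\]" \
  --ginkgo.skip="Alpha|\[(Disruptive|Feature:[^\]]+|Flaky)\]"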

You can run the conformance e2e tests as described here:
https://github.com/cncf/k8s-conformance/blob/master/instructions.md
If your cluster is running 1.7.x or 1.8.x, this approach is easy.
Basically you can run
curl -L https://raw.githubusercontent.com/cncf/k8s-conformance/master/sonobuoy-conformance.yaml | kubectl apply -f -
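After applying the manifest, you can follow the run and pull the results with plain kubectl (a sketch; the namespace, pod name, and results path assume the defaults from the manifest above):
# watch the conformance run until it reports completion
kubectl logs -f -n sonobuoy sonobuoy
# then copy the results directory out of the pod for inspection
kubectl cp sonobuoy/sonobuoy:/tmp/sonobuoy ./results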

Related

Kubernetes cron job to run a query on couchbase database

I need to run an N1QL query on a Couchbase cluster periodically via a Kubernetes job. Can the cbq tool be used? Is there any reference implementation available?
You can use the cbq shell, or call the query REST endpoint directly with curl:
curl -v http://queryhost:8093/query/service -H "Content-Type: application/json" -d '{"statement":"SELECT * FROM default WHERE k1 = $name", "$name": 1}'
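Since the question asks about running this periodically from Kubernetes, either command can be wrapped in a CronJob. A minimal sketch using kubectl create cronjob and the curl variant (the image, schedule, and query host are placeholders, not anything from your setup):
# hypothetical example: run the N1QL query hourly from a Kubernetes CronJob
kubectl create cronjob n1ql-query \
  --image=curlimages/curl \
  --schedule="0 * * * *" \
  -- curl -s http://queryhost:8093/query/service \
       -H "Content-Type: application/json" \
       -d '{"statement":"SELECT * FROM default WHERE k1 = $name", "$name": 1}'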
You can use Eventing and run SQL++ (N1QL) via a periodic timer in the eventing service.
This can make your system more portable and eliminate dependencies outside of Couchbase.
Refer to:
https://docs.couchbase.com/server/current/eventing/eventing-handler-basicN1qlSelectStmt.html
https://docs.couchbase.com/server/current/eventing/eventing-examples-recurring-timer.html

Execute AWS ECS run-task with network configuration overrides

I'm running a task in AWS ECS using the CLI command run-task.
I'm successfully running a task as follows:
aws ecs run-task --cluster ${stackName}-cluster \
--task-definition ${stackName}-${tag} \
--launch-type="FARGATE" \
--network-configuration '{ "awsvpcConfiguration": { "assignPublicIp":"DISABLED", "securityGroups": ["sg-......"], "subnets": ["subnet-.....","subnet-.....","subnet-......"]}}' \
--count 1 \
--profile ${profile} \
--overrides file://overrides.json
The way I understand it, if you're using FARGATE you must have NetworkMode: awsvpc in your task definition, and you need to specify the awsvpcConfiguration every time you run a task. This is all fine.
However, to make the above invocation tidier, is there a way to pass the --network-configuration above as an override? The documentation says you can pass environment variables, but it's not clear if this includes network.
I would be very grateful to anybody who could shed some light on this.
No you can't do that. Here's the full list of things you can specify in ECS Task Overrides. Network configuration is not in that list.
The documentation says you can pass environment variables, but it's not clear if this includes network.
The network configuration is not an environment variable.
If you just want to simplify the command line by passing in more arguments from a file, you can use the --cli-input-json or --cli-input-yaml arguments.
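For example (a sketch; the generated skeleton contains every run-task parameter, including networkConfiguration, so trim it down to what you need):
# generate a template with all run-task parameters
aws ecs run-task --generate-cli-skeleton > run-task.json
# edit run-task.json (cluster, taskDefinition, networkConfiguration, overrides, ...), then:
aws ecs run-task --cli-input-json file://run-task.json --profile ${profile}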

AWS CLI 2 can't update-service use CLI

I have a cluster on ECS and everything works well! When I used AWS CLI v1, I could update my service with a command like this: aws ecs update-service --cluster [cluster-name] --service [service-name] --task-definition [task-name] --force-new-deployment. After updating the CLI to v2 I try to use this command and everything just gets stuck! I didn't find any changes in the AWS documentation. Do you have any ideas?
Update (screenshot attached): the problem is that everything starts well, without errors or warnings; it just gets stuck!
The AWS CLI version 2 uses a client-side pager. As stated in the documentation, you can "use the --no-cli-pager command line option to disable the pager for a single command use" (or use any of the other options described there).
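For example, any of the following disables the pager, shown here with placeholder names (a sketch of the documented options):
# per command
aws ecs update-service --cluster my-cluster --service my-service \
  --task-definition my-task --force-new-deployment --no-cli-pager
# per shell session
export AWS_PAGER=""
# permanently, written to ~/.aws/config
aws configure set cli_pager ""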
It worked. What you are seeing is the output as documented. You can scroll through the output using up and down arrow keys or hit the Q key to dismiss the output.
Alternately, you can redirect the output to a file for review later:
aws ecs update-service ... > result.json
Or just ignore the output entirely:
aws ecs update-service ... > /dev/null

Deployment(CI-CD) pipeline for serverless application

I have created a simple node express MongoDB app which has 3 API endpoints to perform basic crud operations.
If I was to deploy this to Heroku as a service and use bitbucket-pipeline to perform CI-CD this would do the job for me. On top of this, I can have Heroku pipelines to have multiple stages of environments like dev and production.
And after doing all above I would be done with my pipeline and happy about it.
Now coming back to Serverless, I have deployed my API endpoints to AWS as lambda functions, And that is the only environment (let's say DEV) present at the moment.
Now how can I achieve a pipeline similar to the one mentioned earlier in a serverless architecture?
None of the solutions out there (maybe I missed some) suggest promoting the actual code that was tried and tested in the dev environment to production; they all deploy a new set of code instead. Is this a limitation?
Option 1
Presuming that you are developing a Node Serverless application, deploying a new set of code with the same git commit ID and package-lock.json/yarn.lock should result in the same environment. This can be achieved by executing multiple deploy commands to different stages e.g.
sls deploy -s dev
sls deploy -s prod
There are various factors that may cause the deployed environments to be different, but the risk of that should be very low. This is the simplest CI/CD solution you can implement.
Option 2
If you'd like to avoid the risk from Option 1 at all cost, you can split the package and deployment phase in your pipeline. Create the package before you deploy from the codebase that you have checked out:
sls package -s dev --package build/dev
sls package -s prod --package build/prod
Archive as necessary, then to deploy:
sls deploy -s dev --package build/dev
sls deploy -s prod --package build/prod
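For example, if packaging and deployment run as separate pipeline steps, the build directories can be handed over as an ordinary build artifact (a sketch; the archive name is arbitrary):
# packaging step: archive the packaged services
tar -czf serverless-build.tgz build/dev build/prod
# deploy step: restore the artifact and deploy the pre-built package
tar -xzf serverless-build.tgz
sls deploy -s prod --package build/prod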
Option 3
This is an improved version of Option 2. I have not tried this solution, but it should theoretically be possible. The problem with Option 2 is that you have to execute the package command multiple times, which might not be desirable (YMMV). To avoid packaging more than once, first create the package:
sls package -s dev --package build
Then to deploy:
# Execute a script to modify build/cloudformation-template-update-stack.json to match dev environment
sls deploy -s dev --package build
# Execute a script to modify build/cloudformation-template-update-stack.json to match prod environment
sls deploy -s prod --package build
If you have the following resource in build/cloudformation-template-update-stack.json for example:
"MyBucket": {
"Type": "AWS::S3::Bucket",
"Properties": {
"BucketName": "myapp-dev-bucket"
}
},
The result of the script you execute before sls deploy should modify the CF resource to:
"MyBucket": {
"Type": "AWS::S3::Bucket",
"Properties": {
"BucketName": "myapp-prod-bucket"
}
},
This option of course implies that you can't have any hardcoded resource names in your app; every resource name must be injected from serverless.yml into your Lambdas.
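For illustration, the "script" in Option 3 can be as small as a rename pass over the packaged template (a sketch that assumes the stage name is embedded in resource names exactly as in the bucket example above):
# rewrite dev-stage resource names to their prod equivalents in the packaged template
sed -i 's/myapp-dev-/myapp-prod-/g' build/cloudformation-template-update-stack.json
sls deploy -s prod --package build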

Opsworks Chef 12 recipes

Has anyone attempted to convert Opsworks Chef v11 recipes to Chef v12?
I'm running multiple stacks on Chef 11 and decided to start converting some of them to Chef 12. Since AWS dropped their OpsWorks app layers, such as the Rails layer recipes, we (OpsWorks users) are now responsible for creating the deploy user, git-checking out repos into deploy_to, etc.
It's all good with the flexibility and no more namespace conflicts, but we are missing all the good stuff OpsWorks was giving us for free.
I wonder if someone has converted their recipes to Chef 12 and open-sourced them? Otherwise, is the community interested in these recipes at all? I'm pretty sure I'm not alone here.
Thank you in advance!
The opsworks_ruby cookbook on the Supermarket is basically everything you need. It even puts the apps into the same directories (i.e. /srv/www/app_name/), sets up your database.yml, etc, etc.
The main difference between this recipe and other non-OpsWorks recipes is that this will pull everything out of the OpsWorks configuration for you. You don't have to customize the recipes, just make sure your app and layers are named correctly - it'll build everything from there - including your RDS configuration for the database.yml!
Another difference is that the layers in OpsWorks won't be "Ruby aware", so you won't have fields for Rails-ish or Ruby-ish things and will instead need to manage those elsewhere. The way ENV vars are loaded is a little different too.
Also be sure to read up on AWS's implementation of Chef 12 for OpsWorks. They technically have two Chef cookbooks running, their internal one and yours. Theirs takes care of keeping the agent up to date, loading users (for SSH), wiring up monitoring, etc. You'll have to manage the rest.
We've either replaced stuff from their huge cookbook with individual cookbooks from the Supermarket or just rewrote it. For instance, the old Chef 11 opsworks_initial_setup had a couple things around tweaking network and linux settings - we recreated that.
It also does use deploy users as applicable, for instance:
$ ps -eo user,command
USER    COMMAND
// snip
root    nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
aws     opsworks-agent: master 10820
aws     opsworks-agent: keep_alive of master 10820
aws     opsworks-agent: statistics of master 10820
aws     opsworks-agent: process_command of master 10820
deploy  unicorn_rails master --env production --daemonize -c /srv/www/app/shared/config/unicorn.conf
deploy  unicorn_rails worker[0] --env production --daemonize -c /srv/www/app/shared/config/unicorn.conf
deploy  unicorn_rails worker[1] --env production --daemonize -c /srv/www/app/shared/config/unicorn.conf
deploy  unicorn_rails worker[2] --env production --daemonize -c /srv/www/app/shared/config/unicorn.conf
deploy  unicorn_rails worker[3] --env production --daemonize -c /srv/www/app/shared/config/unicorn.conf
nginx   nginx: worker process
nginx   nginx: worker process
Just a small example of the process output, but root boots things as needed and each process runs under its own user to limit rights and access.
I think the most common way is to use the "application" cookbook from the supermarket: https://supermarket.chef.io/cookbooks/application/versions/4.1.6 (which is also based on Poise). Attention: use version ~4, they removed almost all of the good features in v5.
It will create the directory structure, supports different deploy-strategies and offers some events to hook.
Be aware: in my opinion, the OpsWorks documentation is only semi-good when it comes to the "deploy with OpsWorks and Chef 12" topic: the information from the GUI (like the repo URL, etc.) is not on the node object but in a data bag for the application. For debugging it can be very helpful to have a look into the /var/chef/runs/<run-id>/ directory to see what is available from where.
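A quick way to poke at that data on the instance itself (a sketch; the run-id directory name differs per run):
# list the per-run working directories the OpsWorks agent leaves behind
sudo ls /var/chef/runs/
# inspect what a particular run was given (attributes, data bags, etc.)
sudo ls -R /var/chef/runs/<run-id>/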
Small snippet that shows the idea:
app = search("aws_opsworks_app").first

application "#{app['shortname']}" do
  owner 'root'
  group 'root'
  repository app['app_source']['url']
  revision 'master'
  path "/srv/#{app['shortname']}"
end
This will create the releases/current directory structure under /srv and check out the code. Note: you might think the SSH key you specify in the GUI is somehow automatically put in the proper place. It's not; you'll have to take care of that on your own. Check the Chef 11 OpsWorks cookbook: https://github.com/aws/opsworks-cookbooks/blob/release-chef-11.10/scm_helper/libraries/git.rb
I don't know about the old OpsWorks cookbooks, but check out https://github.com/poise/application_examples/ for some examples of doing Rails (and more) deploys using plain Chef (they will work on OpsWorks too).