This is how I'm setting up the entry:
cat << EOF > /tmp/route53-healthcheck.json
{
  "IPAddress": "10.10.10.10",
  "Port": 80,
  "Type": "HTTP",
  "ResourcePath": "/somefile.txt"
}
EOF
aws route53 create-health-check \
--caller-reference "$(date +'%Y%m%dT%H%M%S')" \
--health-check-config file:///tmp/route53-healthcheck.json > /tmp/route53-healthcheck.log
When I look at the health check in the Route 53 console, its name is missing (the first entry was created manually and the second one comes from the snippet above; I'm referring to the redacted name column in the screenshot).
None of the options listed in the docs seem relevant:
{
"IPAddress": "string",
"Port": integer,
"Type": "HTTP"|"HTTPS"|"HTTP_STR_MATCH"|"HTTPS_STR_MATCH"|"TCP"|"CALCULATED",
"ResourcePath": "string",
"FullyQualifiedDomainName": "string",
"SearchString": "string",
"RequestInterval": integer,
"FailureThreshold": integer,
"MeasureLatency": true|false,
"Inverted": true|false,
"HealthThreshold": integer,
"ChildHealthChecks": ["string", ...]
}
Any ideas whether there is another way to set that name?
Solution
aws route53 change-tags-for-resource --resource-type healthcheck --resource-id 41633bb1-4adc-4357-983f-767191ff3248 --add-tags Key=Name,Value="new-name"
Some mistakes I made:
My AWS CLI version was old. On Ubuntu, I had to apt-get remove awscli and then install the latest version from pip with pip install awscli; the executable then ends up in ~/.local/bin/aws.
After changing the name, I had to force-reload the console page (think Ctrl+Shift+R) instead of just refreshing it with the AWS refresh icon.
You need to use the change-tags-for-resource CLI command to set a tag on the resource[1].
Example:
aws route53 change-tags-for-resource --resource-type healthcheck --resource-id <healthcheck guid> --add-tags Key=Name,Value=<value>
http://docs.aws.amazon.com/cli/latest/reference/route53/change-tags-for-resource.html
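Putting the two steps together: create-health-check returns JSON containing the new check's Id, which can be fed straight into the tagging call. A sketch that pulls the Id out of the saved log with python3 (the sample response below is abbreviated and hypothetical; the real one carries more fields):

```shell
# Abbreviated, hypothetical sample of what create-health-check writes
# to the log file captured above
cat > /tmp/route53-healthcheck.log << 'EOF'
{"HealthCheck": {"Id": "41633bb1-4adc-4357-983f-767191ff3248"}}
EOF

# Extract the Id from the JSON response
HC_ID=$(python3 -c "import json; print(json.load(open('/tmp/route53-healthcheck.log'))['HealthCheck']['Id'])")
echo "$HC_ID"

# Tag the check so the console's Name column is populated
# (commented out: requires AWS credentials)
# aws route53 change-tags-for-resource --resource-type healthcheck \
#   --resource-id "$HC_ID" --add-tags Key=Name,Value="new-name"
```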
Related
One can currently connect Jupyter to an existing environment if one installs ipykernel in that particular environment first (and then creates a "kernel" for that environment).
My question is: how can that be achieved without touching the environment?
I tried creating a kernelspec.json file manually:
"argv": [
"/path/to/envs/myenv/bin/python",
"-m",
"ipykernel_launcher",
"-f",
"{connection_file}"
],
"display_name": "myenv",
"language": "python",
"metadata": {
"debugger": true
}
}
but that doesn't work.
Any hints (even regarding why my request is not sensible) are appreciated.
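One detail worth checking: Jupyter expects the spec file to be named kernel.json (not kernelspec.json), and it must sit in its own directory under one of the kernels data paths. A minimal sketch for a user-level install on Linux, reusing the spec from the question (the interpreter path is the question's placeholder):

```shell
# Kernel specs live at <data-dir>/kernels/<name>/kernel.json;
# ~/.local/share/jupyter is the default user data dir on Linux
mkdir -p ~/.local/share/jupyter/kernels/myenv
cat > ~/.local/share/jupyter/kernels/myenv/kernel.json << 'EOF'
{
  "argv": ["/path/to/envs/myenv/bin/python", "-m", "ipykernel_launcher",
           "-f", "{connection_file}"],
  "display_name": "myenv",
  "language": "python"
}
EOF

# Verify Jupyter sees it (if jupyter is on PATH)
if command -v jupyter > /dev/null; then jupyter kernelspec list; fi
```

Note that launching this kernel still requires ipykernel to be importable by that interpreter, which is exactly the part that touches the environment.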
I've created the following task in my Ansible playbook:
- name: Create a k8s namespace
  k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Secret
      metadata:
        name: bigip-login
        namespace: kube-system
      data:
        password: dGVzdA==
        username: YWRtaW4=
      type: Opaque
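As an aside, the values under data in a Secret are plain base64-encoded strings; a quick sketch of how the ones above were produced (printf rather than echo -n, to stay portable and avoid encoding a trailing newline):

```shell
# base64-encode the plaintext credentials; no trailing newline is included
printf '%s' 'admin' | base64   # YWRtaW4=
printf '%s' 'test'  | base64   # dGVzdA==
```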
However, when I run my playbook I get the following error:
The full traceback is:
Traceback (most recent call last):
  File "/tmp/ansible_k8s_payload_n071fcyu/ansible_k8s_payload.zip/ansible_collections/kubernetes/core/plugins/module_utils/common.py", line 92, in <module>
    from kubernetes.dynamic.resource import ResourceInstance
ModuleNotFoundError: No module named 'kubernetes'
fatal: [master.madebeen.com]: FAILED! => {
    "changed": false,
    "error": "No module named 'kubernetes'",
    "invocation": {
        "module_args": {
            "api_key": null,
            "api_version": "v1",
            "append_hash": false,
            "apply": false,
            "ca_cert": null,
            "client_cert": null,
            "client_key": null,
            "context": null,
            "continue_on_error": false,
            "definition": {
                "apiVersion": "v1",
                "data": {
                    "password": "VGFyLk1pZC5GdW4tNDU2",
                    "username": "YWRtaW4="
                },
                "kind": "Secret",
                "metadata": {
                    "name": "bigip-login",
                    "namespace": "kube-system"
                },
                "type": "Opaque"
            },
            "delete_options": null,
            "force": false,
            "host": null,
            "kind": null,
            "kubeconfig": null,
            "label_selectors": null,
            "merge_type": null,
            "name": null,
            "namespace": null,
            "password": null,
            "persist_config": null,
            "proxy": null,
            "proxy_headers": null,
            "resource_definition": {
                "apiVersion": "v1",
                "data": {
                    "password": "VGFyLk1pZC5GdW4tNDU2",
                    "username": "YWRtaW4="
                },
                "kind": "Secret",
                "metadata": {
                    "name": "bigip-login",
                    "namespace": "kube-system"
                },
                "type": "Opaque"
            },
            "src": null,
            "state": "present",
            "template": null,
            "username": null,
            "validate": null,
            "validate_certs": null,
            "wait": false,
            "wait_condition": null,
            "wait_sleep": 5,
            "wait_timeout": 120
        }
    },
    "msg": "Failed to import the required Python library (kubernetes) on master's Python /usr/bin/python3. Please read the module documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter"
}
According to the example provided here, that should've worked. I have also tried the following suggestion (without any success), since I don't have the JSON file provided here as an example:
---
apiVersion: v1
data:
  password: dGVzdA==
  username: YWRtaW4=
kind: Secret
metadata:
  name: bigip-login
  namespace: kube-system
type: Opaque
What intrigues me is that both the community.kubernetes and kubernetes.core collections are currently installed:
marlon@ansible:~/.ansible$ ansible-galaxy collection install community.kubernetes
Process install dependency map
Starting collection install process
Skipping 'community.kubernetes' as it is already installed
marlon@ansible:~/.ansible$ ansible-galaxy collection install kubernetes.core
Process install dependency map
Starting collection install process
Skipping 'kubernetes.core' as it is already installed
marlon@ansible:~/.ansible$
Here is the Python version that Ansible is currently using:
marlon@ansible:~$ python3 --version
Python 3.8.10
marlon@ansible:~$ ansible --version | grep "python version"
python version = 3.8.10 (default, Sep 28 2021, 16:10:42) [GCC 9.3.0]
marlon@ansible:~$
I installed Ansible on Ubuntu as recommended in the installation guide:
$ sudo apt update
$ sudo apt install software-properties-common
$ sudo add-apt-repository --yes --update ppa:ansible/ansible
$ sudo apt install ansible
Do you have any suggestions for use cases 1 and 2, so we can settle this once and for all and leave it here as a future reference for others to benefit from?
This error
"Failed to import the required Python library (kubernetes) on master's Python /usr/bin/python3.
means the kubernetes Python library is not installed on the managed node (here, master.madebeen.com). Normally you could solve this problem by executing:
pip3 install kubernetes
However, you are provisioning the host with Ansible, so you will have to take a different approach: add this dependency to your system image. A similar question has already been asked here.
The problem there was with a different module, but the procedure is the same for you.
You can find an example system image definition here. (Note that that author uses Python 2, while your version is Python 3.)
In your situation, put the command
pip3 install kubernetes
in your system image definition. If you are using a base system image, create a custom one by adding the line above. This Python dependency has to be installed into the image before the k8s module can use it.
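Since the import fails on the managed node's interpreter (master's /usr/bin/python3), another option is to install the library from the playbook itself, in a task that runs before the k8s task. A sketch using Ansible's pip module (the pip3 executable name is an assumption about the target host):

```yaml
# Hypothetical task: install the kubernetes client library on the managed node
- name: Ensure the kubernetes Python library is present
  pip:
    name: kubernetes
    executable: pip3
```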
I'm trying to understand how to properly use the match property from this tutorial. Frankly, I don't understand what I need to do: every time I run cmd+shift+P "Docker: Compose Up", I still have to select which configuration file to run.
In my example, I have:
"docker.commands.composeUp": [
{
"label": "override",
"template": "docker-compose -f docker-compose.yml docker-compose.override.yml up -d --build",
"match": "override"
},
{
"label": "debug",
"template": "docker-compose -f docker-compose.yml docker-compose.debug.yml up -d --build",
"match": "debug"
}
So, how does the regex matching work here?
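For what it's worth, the patterns themselves can be sanity-checked outside VS Code. Assuming, as the tutorial suggests, that match is a regular expression compared against the compose file being considered, each pattern should select exactly one of the two files:

```shell
# Hypothetical check: which of the two compose file names each pattern matches
printf '%s\n' docker-compose.override.yml docker-compose.debug.yml | grep -E 'override'
# -> docker-compose.override.yml
```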
When running this command:
kubectl apply -f tenten
I get this error:
unable to decode "tenten\.angular-cli.json": Object 'Kind' is missing in '{
"project": {
"$schema": "./node_modules/#angular/cli/lib/config/schema.json",
"name": "tenten"
},
"apps": [{
"root": "src/main/webapp/",
"outDir": "target/www/app",
"assets": [
"content",
"favicon.ico"
],
"index": "index.html",
"main": "app/app.main.ts",
"polyfills": "app/polyfills.ts",
"test": "",
"tsconfig": "../../../tsconfig.json",
"prefix": "jhi",
"mobile": false,
"styles": [
"content/scss/vendor.scss",
"content/scss/global.scss"
],
"scripts": []
}],
It looks like you're running this from the parent directory of your applications. You should 1) create a directory that's parallel to your applications and 2) run yo jhipster:kubernetes in it. Then run kubectl apply -f tenten in that directory after you've built and pushed your docker images. For example, here's the output when I run it from the kubernetes directory in my jhipster-microservices-example project.
± yo jhipster:kubernetes
_-----_
| | ╭──────────────────────────────────────────╮
|--(o)--| │ Update available: 2.0.0 (current: 1.8.5) │
`---------´ │ Run npm install -g yo to update. │
( _´U`_ ) ╰──────────────────────────────────────────╯
/___A___\ /
| ~ |
__'.___.'__
´ ` |° ´ Y `
⎈ [BETA] Welcome to the JHipster Kubernetes Generator ⎈
Files will be generated in folder: /Users/mraible/dev/jhipster-microservices-example/kubernetes
WARNING! kubectl 1.2 or later is not installed on your computer.
Make sure you have Kubernetes installed. Read http://kubernetes.io/docs/getting-started-guides/binary_release/
Found .yo-rc.json config file...
? Which *type* of application would you like to deploy? Microservice application
? Enter the root directory where your gateway(s) and microservices are located ../
2 applications found at /Users/mraible/dev/jhipster-microservices-example/
? Which applications do you want to include in your configuration? (Press <space> to select, <a> to toggle all, <i> to inverse selection) blog, store
JHipster registry detected as the service discovery and configuration provider used by your apps
? Enter the admin password used to secure the JHipster Registry admin
? What should we use for the Kubernetes namespace? default
? What should we use for the base Docker repository name? mraible
? What command should we use for push Docker image to repository? docker push
Checking Docker images in applications' directories...
ls: no such file or directory: /Users/mraible/dev/jhipster-microservices-example/blog/target/docker/blog-*.war
identical blog/blog-deployment.yml
identical blog/blog-service.yml
identical blog/blog-postgresql.yml
identical blog/blog-elasticsearch.yml
identical store/store-deployment.yml
identical store/store-service.yml
identical store/store-mongodb.yml
conflict registry/jhipster-registry.yml
? Overwrite registry/jhipster-registry.yml? overwrite this and all others
force registry/jhipster-registry.yml
force registry/application-configmap.yml
WARNING! Kubernetes configuration generated with missing images!
To generate Docker image, please run:
./mvnw package -Pprod docker:build in /Users/mraible/dev/jhipster-microservices-example/blog
WARNING! You will need to push your image to a registry. If you have not done so, use the following commands to tag and push the images:
docker image tag blog mraible/blog
docker push mraible/blog
docker image tag store mraible/store
docker push mraible/store
You can deploy all your apps by running:
kubectl apply -f registry
kubectl apply -f blog
kubectl apply -f store
Use these commands to find your application's IP addresses:
kubectl get svc blog
See the end of my blog post Develop and Deploy Microservices with JHipster for more information.
Trying to fully automate Heroku's Review Apps (beta) for an app. Heroku wants us to use db/seeds.rb to seed the recently spun up instance's DB.
We don't have a db/seeds.rb with this app. We'd like to set up a script to copy the existing DB from the current parent (staging) and use that as the DB for the new app under review.
This I can do manually:
heroku pg:copy myapp::DATABASE_URL DATABASE_URL --app myapp-pr-1384 --confirm myapp-pr-1384
But I can't figure out how to get the app name that Heroku creates into the postdeploy script.
Anyone tried this and know how it might be automated?
I ran into this same issue and here is how I solved it:
1. Set up the database URL you want to copy from as an environment variable on the base app for the pipeline. In my case this is STAGING_DATABASE_URL. The URL format is postgresql://username:password@host:port/db_name.
2. In your app.json file, make sure to copy that variable over.
3. In your app.json, provision a new database, which will set the DATABASE_URL environment variable.
4. Use the following script to copy over the database:
pg_dump $STAGING_DATABASE_URL | psql $DATABASE_URL
Here is my app.json file for reference:
{
  "name": "app-name",
  "scripts": {
    "postdeploy": "pg_dump $STAGING_DATABASE_URL | psql $DATABASE_URL && bundle exec rake db:migrate"
  },
  "env": {
    "STAGING_DATABASE_URL": {
      "required": true
    },
    "HEROKU_APP_NAME": {
      "required": true
    }
  },
  "formation": {
    "web": {
      "quantity": 1,
      "size": "hobby"
    },
    "resque": {
      "quantity": 1,
      "size": "hobby"
    },
    "scheduler": {
      "quantity": 1,
      "size": "hobby"
    }
  },
  "addons": [
    "heroku-postgresql:hobby-basic",
    "papertrail",
    "rediscloud"
  ],
  "buildpacks": [
    {
      "url": "heroku/ruby"
    }
  ]
}
An alternative is to share the database between review apps. You can inherit DATABASE_URL in your app.json file.
PS: This is enough for my case, which is a small team; keep in mind that it may not be enough for yours. Also, I keep my production and test (or staging, or dev, whatever you call it) data separated.
Alternatively:
Another solution using pg_restore, thanks to
https://gist.github.com/Kalagan/1adf39ffa15ae7a125d02e86ede04b6f
{
  "scripts": {
    "postdeploy": "pg_dump -Fc $DATABASE_URL_TO_COPY | pg_restore --clean --no-owner -n public -d $DATABASE_URL && bundle exec rails db:migrate"
  }
}
I ran into problem after problem trying to get this to work. This postdeploy script finally worked for me:
pg_dump -cOx $STAGING_DATABASE_URL | psql $DATABASE_URL && bundle exec rails db:migrate
I see && bundle exec rails db:migrate as part of the postdeploy step in a lot of these responses.
Should that actually just be bundle exec rails db:migrate in the release section of app.json?
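For what it's worth: the postdeploy script in app.json runs once, right after the review app is first created, while the release phase runs on every deploy (including the first). The release phase is declared in the Procfile rather than in app.json, so moving the migration there would look like this sketch (the web line is a placeholder for whatever the app already runs):

```
# Procfile (sketch)
release: bundle exec rails db:migrate
web: bundle exec puma -C config/puma.rb
```

The postdeploy script would then keep only the one-time database copy.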