Provide Proxy Information to Job - concourse

Wondering if anyone has come across this: Is it possible to provide proxy information to a Concourse job? Something along the lines of this:
- name: bosh-deploy-0
...
jobs:
- name: deploybosh
  properties:
    http_proxy_url: <http_proxy_url>:<http_proxy_port>
    https_proxy_url: <https_proxy_url>:<http_proxy_port>
    no_proxy:
    - localhost
    - 127.0.0.1
If anyone has a working example, I'd be very much appreciative!!

You can only set these properties per worker. https://github.com/concourse/concourse-bosh-release/blob/v4.2.1/jobs/worker/spec#L142-L153.
If you want a job to run with specific proxy information set, you need to:
1. Deploy a worker with those properties set, and with some worker tag.
2. Configure every step of the job with that same tag, as sketched below.
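A minimal sketch of the pipeline side, assuming the proxied worker was deployed with a tag named proxied (the tag, resource, and task names here are placeholders, not part of the worker spec linked above):

jobs:
- name: deploybosh
  plan:
  - get: bosh-deploy-src
    tags: [proxied]
  - task: deploy
    file: bosh-deploy-src/ci/deploy.yml
    tags: [proxied]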

You could also set the proxy settings at the beginning of your job task (and optionally pass the proxy endpoint via parameters or a config server backend). That's maybe not the nicest way, but it works quite well.
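As a sketch of that approach (the task file and proxy endpoint below are placeholders), the proxy settings can be passed as params on the task step, which Concourse exposes to the task as environment variables:

- task: deploy
  file: bosh-deploy-src/ci/deploy.yml
  params:
    http_proxy: http://proxy.example.com:8080
    https_proxy: http://proxy.example.com:8080
    no_proxy: localhost,127.0.0.1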

Related

Specify postgresql database name in cloud foundry manifest.yml

Is there a way to specify a postgresql database name to connect to in the cloud foundry manifest.yml file? I've been raking through the documentation and haven't yet found this specific information.
I'm imagining something like this:
applications:
- name: my-app
  routes:
  - route: my-app.mybluemix.net
  services:
  - postgres
    dbname: database2
With that approach, a PostgreSQL connection could be made using just the connection string provided by VCAP_SERVICES parsing modules (cfenv in the case of Node).
If this is not possible, I will just set a dbname environment variable and build my own connection string.
There is nothing like that in a Cloud Foundry application manifest.yml.
The manifest.yml only takes a list of service instance names, and the services with those names will be bound to your app. It does not allow you to set other metadata.
https://docs.cloudfoundry.org/devguide/deploy-apps/manifest-attributes.html#services-block
I don't know if these will help, but when you run cf bind-service directly there are two additional provisions you can make use of (neither is supported by manifest.yml as of this writing):
Arbitrary bind parameters. These probably won't help unless your service broker supports them, but they are a way to pass additional information to the service broker. If your broker supported it, you could in theory request a database named XYZ by passing it some config this way.
Named service bindings. This provides what amounts to a second name. The intent is that you can create the service with a name of X, but your application can look for a service binding with name Y. You can use this to swap in differently named services while still exposing the same binding name to the application, so it will always find the service. Both are illustrated below.
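For illustration only (the app, service, and binding names are made up, and the bind parameters only work if your broker understands them), the two provisions look roughly like this on the CLI:

# Arbitrary bind parameters
cf bind-service my-app postgres -c '{"dbname": "database2"}'

# Named service binding
cf bind-service my-app postgres --binding-name my-app-db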
If you are trying to pass some other service-instance-related metadata to your application, for example the database name or the connection pool size, you'd need to do it some other way. Using environment variables like you mentioned is one option; you could also use a config file or CLI arguments passed to your application. Which you pick is mostly a matter of preference and of what your library/framework supports.
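If you go the environment variable route, the value can at least live in the manifest; a sketch (the variable name is arbitrary):

applications:
- name: my-app
  services:
  - postgres
  env:
    DBNAME: database2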
For what it's worth, most service brokers I've seen pick a specific database name and tell you to use it. If the broker said to connect to db XYZ and you made your connection to myCoolDb, the connection would fail. Just wanted to mention this; your mileage may vary.

How to set API Server parameters on kubespray deployment

I am using kubespray for the deployment of a Kubernetes cluster and want to set some API server parameters for the deployment. Specifically, I want to configure authentication via OpenID Connect (e.g. set the oidc-issuer-url parameter). I saw that kubespray has some vars to set (https://github.com/kubernetes-sigs/kubespray/blob/master/docs/vars.md), but not the ones I am looking for.
Is there a way to set these parameters via kubespray? I don't want to configure each master manually (e.g. by editing the /etc/kubernetes/manifests/kube-apiserver.yaml files).
Thanks for your help.
At the bottom of the page you are referring to there is a description of how to define custom flags for the various components of k8s:
kubelet_custom_flags:
- "--eviction-hard=memory.available<100Mi"
- "--eviction-soft-grace-period=memory.available=30s"
- "--eviction-soft=memory.available<300Mi"
The possible vars are:
apiserver_custom_flags
controller_mgr_custom_flags
scheduler_custom_flags
kubelet_custom_flags
kubelet_node_custom_flags
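For example, the OIDC flags from the question could be passed through apiserver_custom_flags; a sketch with placeholder values for the issuer URL and client ID:

apiserver_custom_flags:
  - "--oidc-issuer-url=https://accounts.example.com"
  - "--oidc-client-id=kubernetes"
  - "--oidc-username-claim=sub"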
The k8s-cluster.yml file has some parameters which allow to set the OID configuration:
kube_oidc_auth: true
...
kube_oidc_url: https:// ...
kube_oidc_client_id: kubernetes
kube_oidc_ca_file: "{{ kube_cert_dir }}/ca.pem"
kube_oidc_username_claim: sub
kube_oidc_username_prefix: oidc:
kube_oidc_groups_claim: groups
kube_oidc_groups_prefix: oidc:
These parameters are the counterparts of the corresponding oidc-* API server flags.

Why does Concourse `get` a resource after `put`ing it?

When I configure the following pipeline:
resources:
- name: my-image-src
  type: git
  source:
    uri: https://github.com/concourse/static-golang
- name: my-image
  type: docker-image
  source:
    repository: concourse/static-golang
    username: {{username}}
    password: {{password}}
jobs:
- name: "my-job"
  plan:
  - get: my-image-src
  - put: my-image
After building and pushing the image to the Docker registry, it subsequently fetches the image. This can take some time and ultimately doesn't really add anything to the build. Is there a way to disable it?
Every put implies a get of the version that was created. There are a few reasons for this:
The primary reason for this is so that the newly created resource can be used by later steps in the build plan. Without the get there is no way to introduce "new" resources during a build's execution, as they're all resolved to a particular version to fetch when the build starts.
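For example (a sketch only; the extra task and its config are made up), a step added after the put can run inside the image that was just pushed, because the implicit get makes it available as an artifact named my-image:

jobs:
- name: "my-job"
  plan:
  - get: my-image-src
  - put: my-image
  - task: smoke-test
    image: my-image
    config:
      platform: linux
      run:
        path: echo
        args: ["the freshly pushed image is usable here"]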
There are some side-benefits to doing this as well. For one, it immediately warms the cache on one worker. So it's at least not totally worthless; later jobs won't have to fetch it. It also acts as validation that the put actually had the desired effect.
In this particular case, as it's the last step in the build plan, the primary reason doesn't really apply. But we didn't bother optimizing it away, since in most cases the side benefits make it worthwhile and it avoids the follow-up question of "why do only SOME put steps imply a get?".
It also cannot be disabled; we resist adding knobs like that, which you'd turn one day and then have to remember to turn back once you actually need the default behavior again.
Docs: https://concourse-ci.org/put-step.html

CQ5: How to know the host name of the publisher where code executes?

To resolve the problem mentioned in the subject, I wrote the following code:
String link = externalizer.publishLink(resolverFactory.getAdministrativeResourceResolver(null), "");
I cannot test it because I only have an author machine, but this code will execute only on publishers.
In production we have several publishers, and I want to get a different result on each publisher.
Will my code work on the publishers?
Have you defined a sling:OsgiConfig for the PID com.day.cq.commons.impl.ExternalizerImpl?
You could configure this in the OSGi console [1] directly as well.
In the configuration, you could supply a DNS name like 'publish http://www.example.com'.
In case of multiple domain names for multiple publish instances, define sling:OsgiConfig nodes for this service and attach them to the run modes of those publish instances (sketched below). This should work.
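As an illustration (the run mode names and domains are placeholders, and the property name assumes the standard Externalizer configuration), run-mode-specific config nodes might carry different externalizer.domains values:

/apps/myproject/config.publish1/com.day.cq.commons.impl.ExternalizerImpl
    externalizer.domains = ["publish http://publisher1.example.com"]
/apps/myproject/config.publish2/com.day.cq.commons.impl.ExternalizerImpl
    externalizer.domains = ["publish http://publisher2.example.com"]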
On a side note, the Externalizer service is generally used for non-HTML content like email, etc. In HTML you can use relative URLs.
[1] http://localhost:4502/system/console/configMgr

How can I make chef restart a service with additional parameters passed in?

I have a template for a Rails site for Sphinx configuration. There can be multiple different Sphinx services on the same machine running on different ports, one per app. Therefore, I only want to restart Sphinx for each site if its corresponding configuration template changes. I've created an /etc/init.d/sphinx script that restarts just one Sphinx instance based on a parameter, similar to:
/etc/init.d/sphinx restart /etc/sphinx/site1.conf
Where site1.conf is defined by a Chef template. I'd really love to use the notifies functionality for Chef Templates to pass in the correct site1.conf parameter if the template changes. Is this possible?
Alternatively, I suppose I could just register a different service for each site similar to:
/etc/init.d/sphinx_site1
However, I'd prefer to pass in the parameters to the script instead.
When defining a service resource, you can customize the start, stop, and restart commands that will be run. You can define a service resource for each site that you have using these customized commands and set up their corresponding notifications.
For example:
service "sphinx_site1" do
supports :restart => true
restart_command "/etc/init.d/sphinx restart /etc/sphinx/site1.conf"
action :nothing
end
template "/etc/sphinx/site1.conf" do
notifies :restart, "service[sphix_site1]"
end
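If you end up with several sites, the same pattern can be generated in a loop. A sketch, assuming a node attribute node['sphinx']['sites'] listing the site names and a sphinx.conf.erb template in the cookbook (both of these are assumptions, not part of the question):

# One service/template pair per site; each template change restarts only its own Sphinx.
node['sphinx']['sites'].each do |site|
  service "sphinx_#{site}" do
    supports :restart => true
    restart_command "/etc/init.d/sphinx restart /etc/sphinx/#{site}.conf"
    action :nothing
  end

  template "/etc/sphinx/#{site}.conf" do
    source "sphinx.conf.erb"
    variables(:site => site)
    notifies :restart, "service[sphinx_#{site}]"
  end
end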