Rundeck on kubernetes can't do https

It seems my Rundeck can't do HTTPS. I'm doing SSL offload at a load balancer. The following is a snippet of my deployment YAML:
containers:
- name: rundeck
  image: rundeck/rundeck:3.1.1
  env:
  - name: RUNDECK_GRAILS_URL
    value: "https://rundeck.somehost.io"
  - name: SERVER_SECURED_URL
    value: "https://rundeck.somehost.io"
  - name: RUNDECK_JVM_SETTINGS
    value: "-Dserver.web.context=/rundeck -Drundeck.jetty.connector.forwarded=true"
I've followed most tips from the net, but my Rundeck still redirects to HTTP after login.

You need to enable the SSL settings, for example:
args: ["-Dserver.https.port=4443 -Drundeck.ssl.config=/home/rundeck/server/config/ssl.properties"]
But you will need to add a certificate (for example, a self-signed certificate) to the container.
You can try:
1) extending the official Rundeck image (like this)
2) creating a volume with the certificate and mounting it at /home/rundeck/etc/truststore (you might also need to mount /home/rundeck/server/config/ssl.properties with the right password); a sketch follows below. BTW, I haven't tried that.
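As a hedged sketch of option 2 (the Secret name rundeck-ssl and its keys are hypothetical), the certificate and config could be mounted from a Kubernetes Secret:

containers:
- name: rundeck
  volumeMounts:
  - name: rundeck-ssl                        # hypothetical Secret holding both files
    mountPath: /home/rundeck/etc/truststore
    subPath: truststore
  - name: rundeck-ssl
    mountPath: /home/rundeck/server/config/ssl.properties
    subPath: ssl.properties
volumes:
- name: rundeck-ssl
  secret:
    secretName: rundeck-ssl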

You need to define the -Drundeck.ssl.config parameter and the SSL port (-Dserver.https.port=4443) in your Rundeck section too (the example has HAProxy and MySQL as part of the container, but you can use just the Rundeck section).
This parameter points to a file with the following content (with your paths and certificate; you'll find the full SSL configuration explanation here):
keystore=/etc/rundeck/ssl/keystore
keystore.password=password
key.password=password
truststore=/etc/rundeck/ssl/truststore
truststore.password=password
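For example, a self-signed keystore matching those paths could be generated with keytool (a sketch only; adjust the alias, CN, paths, and passwords to your setup):

# hypothetical one-liner; the CN should match your hostname
keytool -genkeypair -alias rundeck -keyalg RSA -keysize 2048 \
  -dname "CN=rundeck.somehost.io" \
  -keystore /etc/rundeck/ssl/keystore -storepass password -keypass password
# the same file can double as the truststore referenced above
cp /etc/rundeck/ssl/keystore /etc/rundeck/ssl/truststore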
You can check the entire example project here.
Alternatively, you can use this image, which may be easier to configure (check the "SSL" parameters).

Related

Skaffold.yaml not being parsed correctly

I have the following skaffold.yaml config file:
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifest:
    - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
  - image: hamza9899/auth
    context: auth
    docker:
      dockerfile: Dockerfile
    sync:
      manual:
      - src: 'src/**/*.ts'
        dest: .
But when I run skaffold dev I get the following error:
line 16: field manifest not found in type v2alpha3.KubectlDeploy
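For what it's worth, in that schema version the field is manifests (plural), so a corrected deploy section (a sketch inferred from the error message) would be:

deploy:
  kubectl:
    manifests:
    - ./infra/k8s/*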

could not determine a constructor for the tag '!GetAtt'

I am writing a CloudFormation template (CFT) for a website hosted on S3. The YAML file passes template validation with no issues; however, the build agent returns the following error:
yaml.constructor.ConstructorError: could not determine a constructor for the tag '!GetAtt'
Outputs:
  WebsiteURL:
    Value: !GetAtt RootBucket.WebsiteURL
    Description: URL for website hosted on S3
Try it without the shorthand version of Fn::GetAtt:
Outputs:
  WebsiteURL:
    Value:
      Fn::GetAtt: [RootBucket, WebsiteURL]
    Description: URL for website hosted on S3
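If the build agent parses the template with a generic YAML library such as PyYAML (the yaml.constructor traceback suggests so), another option is to register a constructor for the short-form tags yourself. A minimal sketch, assuming PyYAML and a file named template.yml; this mirrors what tools like cfn-flip do and is not the only fix:

import yaml

def cfn_short_tag(loader, tag_suffix, node):
    # Expand CloudFormation short-form tags to long form:
    # !GetAtt A.B -> {"Fn::GetAtt": ["A", "B"]}, !Ref X -> {"Ref": "X"}, etc.
    name = 'Ref' if tag_suffix == 'Ref' else 'Fn::' + tag_suffix
    if isinstance(node, yaml.ScalarNode):
        value = loader.construct_scalar(node)
        if tag_suffix == 'GetAtt':
            value = value.split('.', 1)
        return {name: value}
    if isinstance(node, yaml.SequenceNode):
        return {name: loader.construct_sequence(node)}
    return {name: loader.construct_mapping(node)}

# Route every tag starting with '!' through cfn_short_tag.
yaml.SafeLoader.add_multi_constructor('!', cfn_short_tag)

with open('template.yml') as f:
    template = yaml.safe_load(f)
# template['Outputs']['WebsiteURL']['Value'] == {'Fn::GetAtt': ['RootBucket', 'WebsiteURL']}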

Chef LWRP, Definition, or Cookbook for abstracting creation of Nginx virtual hosts

I'm trying to figure out the correct way to architect a solution to automatically configure new Rails App servers.
I've looked at the chef-rails cookbook and it seems a little verbose. In our case we always deploy Nginx a certain way, always perform backups a certain way, etc, so much of the configuration would be redundant from one node definition to the next.
My goal is to be able to create a new Rails App server by defining just the following information.
wh_webhead "test_app" do
ssl :enable
backups :enable
passenger :enable
ruby_version 2.0.0
db_type :mysql
db_user "testuser"
db_pass "3207496r9w6"
nagios_ssl_string_match "login"
end
Then I would like Chef to perform the following actions:
Create user accounts
Setup box and install
Install Nginx w/wildcard SSL cert
Configure log rotation
Setup firewall rules to allow traffic to ports 80 and 443
Install Passenger and RVM with Ruby 2.0.0
Create Rails app dirs following template (e.g. /opt/local/test_app)
Create new database on MySQL server, grant access, and setup firewall rules
Create firewall rules for Nagios and configure Nagios to monitor:
port 80 for redirection to port 443
port 443 for HTTP 200 status
port 443 for the text "login"
Configure backups for app dir (e.g. /opt/local/test_app)
I'm already using the community cookbooks for Nginx, Nagios, ufw, etc., and have created recipes in a custom cookbook to configure MySQL and Nginx. There's just a lot of duplicate code from one app's Nginx/MySQL cookbook to the next.
What I'm struggling with is where to use Cookbooks, Recipes, LWRPs and Definitions to properly abstract this.
Should I put the default configuration for Nginx and MySQL in Definitions and then use those in recipes, or create custom wrapper cookbooks with the defaults?
First, take a look at the application_ruby and artifact cookbooks, both of which can automate these workflows for you.
I specifically enjoy using the artifact cookbook, as it provides a lot of flexibility, but the application_ruby cookbook has built-in support for Passenger, Unicorn, and other tools you'd normally find among a Rails application's requirements.
As for your question regarding Cookbooks, Recipes, LWRPs, and Definitions, I would definitely look at #sethvargo's answer at https://stackoverflow.com/a/21733093/747032. It provides a good guide on what to use when, from an employee at Opscode (now Chef, the company) who is constantly involved in the Chef community and thus has excellent knowledge of this topic.
As far as my advice (which I'll keep concise):
Use LWRPs to wrap a lot of resources that are always called together; for example, we use an "AWS EBS" LWRP to create, mount, and format new EBS volumes (see the sketch after this list).
Use recipes to call all your LWRPs (both custom and public) and resources.
Don't use definitions; they have effectively been superseded by LWRPs, in my opinion.
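As a hedged illustration of that first point (the cookbook layout, file names, and attributes here are hypothetical), an LWRP wrapping your Nginx/app defaults might look like this, with the attributes from your desired wh_webhead call declared in the resource and wired up in the provider:

# resources/webhead.rb
actions :create
default_action :create

attribute :app_name, :kind_of => String, :name_attribute => true
attribute :ssl, :kind_of => Symbol, :default => :enable
attribute :db_user, :kind_of => String
attribute :db_pass, :kind_of => String

# providers/webhead.rb
action :create do
  # Render the site-wide Nginx vhost template with per-app values.
  template "/etc/nginx/sites-available/#{new_resource.app_name}" do
    source "nginx_vhost.erb"
    variables(:ssl => new_resource.ssl == :enable)
  end

  # Standard app directory layout, e.g. /opt/local/test_app.
  directory "/opt/local/#{new_resource.app_name}" do
    recursive true
  end
end

A recipe would then call wh_webhead "test_app" do ... end exactly as in your example, keeping each app's definition down to the handful of values that actually differ.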

How can I tell if postgres has kerberos installed?

I am trying to configure postgres (8.4.13) to work with Kerberos. I cannot seem to get it to work. The one "gotcha" I keep reading about is that postgres must be built with Kerberos support. Well, the postgres I have is an RPM downloaded from the internet. How can I tell whether this postgres was built with Kerberos support or not? Is there a way to list "installed components"? Thanks!
pg_config might be helpful, e.g.:
pg_config --configure
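For instance (a sketch; the exact flag names vary by version and packager):

pg_config --configure | grep -Ei 'krb|gssapi'
# a Kerberos-enabled build typically shows --with-krb5 or --with-gssapi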
Some binaries have command-line options which will allow you to see how they were compiled. I'm not sure if postgres will do that. You can use --version (but this is generally very minimalistic) and --describe-config, which will dump values like:
Security and Authentication STRING FILE:/usr/local/etc/postgresql/krb5.keytab
Sets the location of the Kerberos server key file
if Kerberos capabilities are configured for the postgresql installation.
As well, many packaging systems have methods for capturing compile-time options that were passed to the build process (FreeBSD's pkg info -f, for example, does this). It's been a while since I have used rpm, and newer versions may have methods for this sort of query directly on the binary. On rpm-based systems I have administered, I would keep the src.rpm and .spec files on hand in a local repository for each installed application. This was in order to comply with our in-house "policies" :-) and to track the configure and OPTFLAGS settings, the source code used in the build, etc.
Here is a response to a similar question:
https://serverfault.com/questions/36037/how-can-i-find-what-options-an-rpm-was-compiled-with
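On an RPM-based system you can also query the installed package directly; a sketch (package names vary by distribution):

rpm -qi postgresql-server                           # build date, build host, packager
rpm -q --requires postgresql-server | grep -i krb   # Kerberos libs among the dependencies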
A generic UNIX method for seeing which libraries a binary is linked against is to use ldd, like this:
$ ldd /usr/local/bin/postgres
/usr/local/bin/postgres:
libgssapi.so.3 => /usr/local/lib/libgssapi.so.3 (0x800a38000)
libxml2.so.5 => /usr/local/lib/libxml2.so.5 (0x800b74000)
libpam.so.5 => /usr/lib/libpam.so.5 (0x800dc4000)
libssl.so.6 => /usr/lib/libssl.so.6 (0x800ecc000)
libcrypto.so.6 => /lib/libcrypto.so.6 (0x80101f000)
libm.so.5 => /lib/libm.so.5 (0x8012bf000)
libc.so.7 => /lib/libc.so.7 (0x8013df000)
libintl.so.9 => /usr/local/lib/libintl.so.9 (0x801621000)
libheimntlm.so.1 => /usr/local/lib/libheimntlm.so.1 (0x80172a000)
libkrb5.so.26 => /usr/local/lib/libkrb5.so.26 (0x801830000)
libheimbase.so.1 => /usr/local/lib/libheimbase.so.1 (0x8019ad000)
libhx509.so.5 => /usr/local/lib/libhx509.so.5 (0x801ab1000)
libwind.so.0 => /usr/local/lib/libwind.so.0 (0x801bf9000)
libsqlite3.so.8 => /usr/local/lib/libsqlite3.so.8 (0x801d21000)
libasn1.so.8 => /usr/local/lib/libasn1.so.8 (0x801ec3000)
libcom_err.so.2 => /usr/local/lib/libcom_err.so.2 (0x802058000)
libiconv.so.3 => /usr/local/lib/libiconv.so.3 (0x80215b000)
libroken.so.19 => /usr/local/lib/libroken.so.19 (0x802358000)
libcrypt.so.5 => /lib/libcrypt.so.5 (0x802469000)
libthr.so.3 => /lib/libthr.so.3 (0x802589000)
libz.so.5 => /lib/libz.so.5 (0x8026a2000)
As you can see, on my system the postgresql binary is linked against libkrb5.so.26, libgssapi.so.3, libheimntlm.so.1, etc. (these are Heimdal Kerberos libraries).
[EDIT: I still think Milen's response is most likely the best, most thorough, and recommended approach, BUT one caveat I ran into just today: on most of my systems (most of these are FreeBSD), pg_config appears to be installed with the postgresql-client package, and so can potentially have different options set than what was selected when the postgresql server was built. I tend to build lots of functionality into the clients so they can connect to a range of servers, which are often running on different machines. The package with the client command-line shell and libraries (postgresql-devel on most RPM-based Linux systems) is what gives database modules and libraries for Python, Perl, etc. the ability to connect to your DB server. The client libraries often reside on a separate host when you have a web application that is grabbing and storing data (CRUD) in a database back end. That said, most likely the binary client/server/devel packages are built with the same options set ;-)
Anyway, just another data point ... cheers.]
Hope that helps.
Cheers,

crossdomain.xml not found in ejabberd

I have configured the ejabberd server, but I am not able to access http://www.example.com:5280/crossdomain.xml
I have set the following parameters in ejabberd.cfg:
Listeners
{5280, ejabberd_http, [
  {access, all},
  {request_handlers, [
    {["pub", "archive"], mod_http_fileserver},
    {["xmpp-http-bind"], mod_http_bind}
  ]},
  %% captcha,
  http_bind,
  http_poll,
  register,
  web_admin
]}
Modules
{mod_http_fileserver, [
  {docroot, "/var/log/ejabberd/"},
  {accesslog, "/var/log/ejabberd/access.log"},
  {content_types, [{".xml", "text/xml"}]}
]},
crossdomain.xml is present at this path on CentOS: /var/log/ejabberd/
Can anyone help in resolving this issue? I heard that crossdomain.xml can also be served by an Apache web server, but I don't know how to do that.
I guess you are using Strophe with ejabberd. crossdomain.xml has nothing to do with ejabberd; it has to do with allowing Flash to make cross-domain requests.
Of course, you don't need Flash, and it's better to avoid it altogether by using a proxy in front. You can use Apache, nginx, or any other.
Here is a tutorial for nginx.
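For completeness, a minimal nginx sketch (hostnames and paths are assumptions) that proxies BOSH traffic so the page and the XMPP endpoint share an origin, making crossdomain.xml unnecessary:

server {
    listen 80;
    server_name www.example.com;

    # Forward BOSH requests to ejabberd's xmpp-http-bind handler.
    location /http-bind {
        proxy_pass http://127.0.0.1:5280/xmpp-http-bind;
        proxy_set_header Host $host;
        proxy_buffering off;       # BOSH long-polls; don't buffer responses
        proxy_read_timeout 120s;   # keep held requests from timing out
    }
}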