crossdomain.xml not found in ejabberd [closed] - xmpp

I have configured the ejabberd server, but I am not able to access http://www.example.com:5280/crossdomain.xml.
I have set the following parameters in ejabberd.cfg:
Listeners:
{5280, ejabberd_http, [
    {access, all},
    {request_handlers,
     [
      {["pub", "archive"], mod_http_fileserver},
      {["xmpp-http-bind"], mod_http_bind}
     ]},
    %% captcha,
    http_bind,
    http_poll,
    register,
    web_admin
]}
Modules:
{mod_http_fileserver, [
    {docroot, "/var/log/ejabberd/"},
    {accesslog, "/var/log/ejabberd/access.log"},
    {content_types, [{".xml, text/xml"}]}
]},
The crossdomain.xml file is present at that path on CentOS ("/var/log/ejabberd/").
Can anyone help in resolving this issue? I heard that for crossdomain.xml we can also configure the Apache web server, but I don't know how to do that.

I guess you are using Strophe with ejabberd. The crossdomain.xml file has nothing to do with ejabberd; it is about allowing Flash to make cross-domain requests.
Of course, you don't need Flash, and it's better to avoid it altogether by putting a proxy in front. You can use Apache, nginx, or any other web server.
Here is a tutorial for nginx.
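If you go the nginx route, a minimal sketch of the relevant server block might look like this (the hostname, document root, and the /http-bind location are assumptions; adjust them to your deployment). Serving the web client and proxying BOSH from the same origin removes the need for crossdomain.xml entirely:

server {
    listen 80;
    server_name www.example.com;

    # serve the web client (e.g. the Strophe-based page) from the same origin
    root /var/www/html;

    # forward BOSH (HTTP-bind) requests to ejabberd listening on port 5280
    location /http-bind {
        proxy_pass http://127.0.0.1:5280/xmpp-http-bind;
        proxy_set_header Host $host;
        proxy_buffering off;
    }
}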

Related

Skaffold.yaml not being parsed correctly [closed]

I have the following skaffold.yaml config file:
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifest:
      - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
    - image: hamza9899/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
But when I run skaffold dev I get the following error:
line 16: field manifest not found in type v2alpha3.KubectlDeploy
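The message means the kubectl deployer schema has no field named manifest; in the skaffold config that field is plural. A likely fix, assuming the rest of the file stays the same, is:

deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*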

Rundeck on kubernetes can't do https [closed]

It seems my Rundeck can't do HTTPS. I'm doing SSL offload at a load balancer. The following is a snippet of my deployment YAML:
containers:
  - name: rundeck
    image: rundeck/rundeck:3.1.1
    env:
      - name: RUNDECK_GRAILS_URL
        value: "https://rundeck.somehost.io"
      - name: SERVER_SECURED_URL
        value: "https://rundeck.somehost.io"
      - name: RUNDECK_JVM_SETTINGS
        value: "-Dserver.web.context=/rundeck -Drundeck.jetty.connector.forwarded=true"
I've followed most tips from the net, but my Rundeck still redirects to HTTP after login.
You need to enable the SSL settings, for example:
args: ["-Dserver.https.port=4443 -Drundeck.ssl.config=/home/rundeck/server/config/ssl.properties"]
But you will need to add a certificate (for example, a self-signed certificate) to the container.
You can try:
1) extend the official Rundeck image (like this)
2) create a volume with the certificate and mount it on /home/rundeck/etc/truststore (you might also need to mount /home/rundeck/server/config/ssl.properties with the right password); a sketch of this is shown after this list. BTW, I haven't tried that.
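A minimal sketch of option 2, assuming the truststore and ssl.properties are stored in a Kubernetes secret named rundeck-ssl (the secret name and key names are assumptions; adjust them to your setup):

containers:
  - name: rundeck
    image: rundeck/rundeck:3.1.1
    args: ["-Dserver.https.port=4443 -Drundeck.ssl.config=/home/rundeck/server/config/ssl.properties"]
    volumeMounts:
      # mount the truststore and the SSL properties file from the secret
      - name: rundeck-ssl
        mountPath: /home/rundeck/etc/truststore
        subPath: truststore
      - name: rundeck-ssl
        mountPath: /home/rundeck/server/config/ssl.properties
        subPath: ssl.properties
volumes:
  - name: rundeck-ssl
    secret:
      secretName: rundeck-ssl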
You need to define the -Drundeck.ssl.config parameter and the SSL port (-Dserver.https.port=4443) too in your Rundeck section (the example has HAProxy and MySQL as part of the stack, but you can use just the Rundeck section).
This parameter points to a file with the following content (with your own paths and certificate; you have the full SSL configuration explanation here):
keystore=/etc/rundeck/ssl/keystore
keystore.password=password
key.password=password
truststore=/etc/rundeck/ssl/truststore
truststore.password=password
You can check the entire example project here.
Alternatively, you can use this image, which may be easier to configure (check the "SSL" parameters).

could not determine a constructor for the tag '!GetAtt' [closed]

I am writing a CloudFormation template (CFT) for a website hosted on S3. The YAML file passes template validation with no issues; however, the build agent returns the following error:
yaml.constructor.ConstructorError: could not determine a constructor for the tag '!GetAtt'
Outputs:
  WebsiteURL:
    Value: !GetAtt RootBucket.WebsiteURL
    Description: URL for website hosted on S3
Try it without the shorthand version of Fn::GetAtt:
Outputs:
  WebsiteURL:
    Value:
      Fn::GetAtt: [RootBucket, WebsiteURL]
    Description: URL for website hosted on S3

Downloading a web page through proxy [duplicate]

This question already has an answer here: How can I handle proxy servers with LWP::Simple? (1 answer)
I am trying to download a webpage through a proxy connection with the following code:
use LWP::Simple qw(get);
my $url = 'https://www.random-site.com';
my $html = get $url or die "sorry, can't";
I get the obvious error sorry, can't.
The code works on a normal connection, but through the proxy it doesn't, and even with the Hideman program it still doesn't get through. What would be a better approach in this situation? Am I using the wrong module?
Note the LWP::Simple documentation:
The user agent created by this module will identify itself as "LWP::Simple/#.##" and will initialize its proxy defaults from the environment (by calling $ua->env_proxy).
Then, note env_proxy:
Load proxy settings from *_proxy environment variables.
So, set HTTPS_PROXY in the environment.
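For example, from a shell (the proxy address is a placeholder, and fetch.pl is a hypothetical file containing the script above):

# replace with your actual proxy host and port
export HTTPS_PROXY=http://proxy.example.com:8080
perl fetch.pl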

How can I tell if postgres has kerberos installed? [closed]

I am trying to configure Postgres (8.4.13) to work with Kerberos. I cannot seem to get it to work. The one "gotcha" I keep reading about is that Postgres must be built with Kerberos support. Well, the Postgres I have is an RPM downloaded from the internet. How can I tell whether this Postgres was built with Kerberos support or not? Is there a way to list installed components? Thanks!
pg_config might be helpful, e.g.:
pg_config --configure
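For example, a quick check might be the following (the grep pattern is just a suggestion):

pg_config --configure | grep -Ei 'krb5|gssapi'
# look for --with-krb5 or --with-gssapi in the output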
Some binaries have command-line options which will allow you to see how they were compiled. I'm not sure if postgres will do that. You can use --version (but this is generally very minimalistic) and --describe-config, which will dump values like:
Security and Authentication STRING FILE:/usr/local/etc/postgresql/krb5.keytab
Sets the location of the Kerberos server key file
if Kerberos capabilities are configured for the PostgreSQL installation.
As well, many packaging systems have methods for capturing the compile-time options that were passed to the build process (FreeBSD's pkg info -f, for example, does this). It's been a while since I have used rpm, and newer versions may have methods for this sort of query directly on the binary. On RPM-based systems I have administered, I would keep the src.rpm and .spec files on hand in a local repository for each installed application. This was in order to comply with our in-house "policies" :-) and to track the configure and OPTFLAGS settings, the source code used in the build, etc.
Here is a response to a similar question:
https://serverfault.com/questions/36037/how-can-i-find-what-options-an-rpm-was-compiled-with
A generic UNIX method for seeing which libraries a binary is linked against is to use "ldd" like this:
$~/ ldd /usr/local/bin/postgres
/usr/local/bin/postgres:
libgssapi.so.3 => /usr/local/lib/libgssapi.so.3 (0x800a38000)
libxml2.so.5 => /usr/local/lib/libxml2.so.5 (0x800b74000)
libpam.so.5 => /usr/lib/libpam.so.5 (0x800dc4000)
libssl.so.6 => /usr/lib/libssl.so.6 (0x800ecc000)
libcrypto.so.6 => /lib/libcrypto.so.6 (0x80101f000)
libm.so.5 => /lib/libm.so.5 (0x8012bf000)
libc.so.7 => /lib/libc.so.7 (0x8013df000)
libintl.so.9 => /usr/local/lib/libintl.so.9 (0x801621000)
libheimntlm.so.1 => /usr/local/lib/libheimntlm.so.1 (0x80172a000)
libkrb5.so.26 => /usr/local/lib/libkrb5.so.26 (0x801830000)
libheimbase.so.1 => /usr/local/lib/libheimbase.so.1 (0x8019ad000)
libhx509.so.5 => /usr/local/lib/libhx509.so.5 (0x801ab1000)
libwind.so.0 => /usr/local/lib/libwind.so.0 (0x801bf9000)
libsqlite3.so.8 => /usr/local/lib/libsqlite3.so.8 (0x801d21000)
libasn1.so.8 => /usr/local/lib/libasn1.so.8 (0x801ec3000)
libcom_err.so.2 => /usr/local/lib/libcom_err.so.2 (0x802058000)
libiconv.so.3 => /usr/local/lib/libiconv.so.3 (0x80215b000)
libroken.so.19 => /usr/local/lib/libroken.so.19 (0x802358000)
libcrypt.so.5 => /lib/libcrypt.so.5 (0x802469000)
libthr.so.3 => /lib/libthr.so.3 (0x802589000)
libz.so.5 => /lib/libz.so.5 (0x8026a2000)
As you can see, on my system the postgresql binary is linked against libkrb5.so.26, libgssapi.so.3, libheimntlm.so.1, etc. (these are Heimdal Kerberos libraries).
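On a Linux RPM install the same check might look like this one-liner (the binary path is an assumption; adjust it to wherever your package installs postgres):

ldd /usr/bin/postgres | grep -Ei 'krb5|gssapi|heim'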
[EDIT: I still think Milen's response is most likely the best, most thorough, and recommended approach, BUT one caveat I ran into just today: on most of my systems (most of these are FreeBSD) pg_config appears to be installed with the postgresql-client package, and so it can potentially have different options set than what was selected when postgresql-server was built. I tend to build lots of functionality into the clients so they can connect to a range of servers, which are often running on different machines. The package with the client command-line shell and libraries (postgresql-devel on most RPM-based Linux systems) is what gives database modules and libraries for Python, Perl, etc. the ability to connect to your DB server. The client libraries often reside on a separate host when you have a web application that is grabbing and storing data (CRUD) in a database back end. That said, most likely binary client/server/devel packages are built with the same options set ;-)
Anyway, just another data point ... cheers.]
Hope that helps. Cheers.