I have a work laptop that sits behind a painful corporate proxy during the day. When I'm at home, I don't have to worry about a proxy unless I VPN in.
Is there a way to set up an automatic GitHub proxy, such that if I'm at work it uses the corporate proxy, and if I'm at home, it removes the proxy settings?
Or perhaps a way where attempt 1 is made with a proxy, and attempt 2 is made without?
Thanks for any suggestions!
You can set it manually through predefined scripts/functions, as described here:
See nwinkler/bash-it/plugins/available/proxy.plugin.bash
"When working from the office (where I have to use a proxy), I simply call enable_proxy, and when working from home, I call disable_proxy", as detailed there.
You could wrap this in a test that tries a curl first without and then with the HTTP(S)_PROXY variables, to see which call succeeds.
On Linux, that test could be part of your .bashrc, so that any new shell session opens with the right settings already set (or unset).
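As a rough sketch (the proxy host/port and the github.com test URL are placeholders; enable_proxy/disable_proxy are the functions from the bash-it plugin above), something like this in ~/.bashrc could pick the right mode automatically:

# Probe once per shell session and choose the right proxy mode.
auto_proxy() {
  # First try a direct connection (no proxy).
  if curl -s --max-time 5 -o /dev/null https://github.com; then
    disable_proxy
  # Otherwise retry through the corporate proxy.
  elif curl -s --max-time 5 -o /dev/null -x http://proxy.example.com:8080 https://github.com; then
    enable_proxy
  fi
}
auto_proxy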
I'm following the Rundeck documentation, but the only button I have under the "Sources" tab is "ResourceModelSource".
When I click that button, I just get a blank screen.
PPS: the issue happened on the previous version too. I'm new to Rundeck, so I can't say that it ever worked.
I tried adding a manual resources.xml in the project directory (which I had to create manually, which tells me that's another issue) and reloading Rundeck, but that did not seem to work.
While it's not the likely cause, I'll mention it in case it is relevant: I'm hosting on port 4440, but I'm using nginx to forward HTTP (not HTTPS) requests on 443 to 4440, due to corporate network security policy.
I'm sure it's some kind of I/O issue on the local host, but I'm not seeing anything in the logs.
That is a known issue when you have Rundeck installed behind a proxy server; take a look at this: https://github.com/rundeck/rundeck/issues/6278. The solution is to set grails.serverURL (in the rundeck-config.properties file) to the exit URL defined for Rundeck in your proxy server (e.g. grails.serverURL=http://my_domain/rundeck), then restart the Rundeck service.
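As a sketch (the file path assumes a typical package install, and my_domain is a placeholder), the property just has to match the external URL users reach through nginx:

# /etc/rundeck/rundeck-config.properties -- must match the URL exposed by nginx
grails.serverURL=http://my_domain/rundeck

Then restart the service (e.g. sudo systemctl restart rundeckd on a package install).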
I've been chasing this problem around for a while now and I can't get to the bottom of it. I've read the other solutions on here (https://identityserver4.readthedocs.io and https://github.com/IdentityServer/IdentityServer4.Quickstart.UI) and it's still not working, so I've tried to reduce this down to the absolute basics. This is not the actual problem I am facing, but it produces the very same outcome, i.e. I can't get Windows Authentication to work.
I clone https://github.com/IdentityServer/IdentityServer4.Samples
I amend Quickstarts/7_JavaScriptClient/src/QuickstartIdentityServer/Quickstart/Account/AccountController.cs so that WindowsAuthenticationEnabled is true
I then go to http://localhost:5000/account/login and attempt to use the Windows external provider, and I get a 401.
The only difference between this simple sample and what I see on my actual system is that on my real site I'm getting challenged for credentials.
Debugging the code, I never see if (HttpContext.User is WindowsPrincipal) succeed, because it's always a ClaimsPrincipal.
Can someone explain to me what I'm doing wrong?
Do you have Windows Authentication enabled on your IIS site? It needs to be enabled for your WindowsPrincipal to be assigned. Note that Windows Authentication only works when running behind IIS or IIS Express.
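If it helps, here is a sketch of enabling it from the command line for a full IIS site (the site name is a placeholder):

rem Sketch only: enable Windows Authentication for an IIS site
%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" -section:system.webServer/security/authentication/windowsAuthentication /enabled:"True" /commit:apphost

For IIS Express, the equivalent setting is the windowsAuthentication flag in the project's applicationhost.config, or the iisSettings section of launchSettings.json in newer project templates.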
I am trying to add mod_carboncopy to ejabberd, but I'm not sure how to configure it in the web admin panel. Or would I have to configure it in the config.yml instead?
I am not sure what the expected behavior is or how to configure it, so I need a pointer on where to start.
1.) Can I configure it in the web admin panel, in the options for mod_carboncopy?
2.) Or do I have to configure it in config.yml?
3.) After I configure it, do I have to restart the server?
4.) Once that is set up, would I see the copy straight away in the raw input for device 2 if device 1 sends out a message?
Thanks for your time!
I have just finished it. It is actually quite simple.
All you have to do is make sure your ejabberd is 15.03 or later, which has carboncopy on by default. Then, if you are using Strophe, it is very simple to write the plugin: all you have to do is enable carbons through an IQ, and everything's good to go!
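For reference, the carbons "enable" IQ a client sends (per XEP-0280) looks like this; with Strophe you would just build and send an equivalent stanza (the id value is arbitrary):

<iq type='set' id='enable-carbons-1'>
  <enable xmlns='urn:xmpp:carbons:2'/>
</iq>

Once the server acknowledges it with a result IQ, copies of messages sent or received on device 1 should show up in the raw XML stream of device 2.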
I use the gsutil tool to download archives from Google Cloud Storage.
I use the following CMD command:
python c:\gsutil\gsutil cp gs://pubsite_prod_rev_XXXXXXXXXXXXX/YYYYY/*.zip C:\Tmp\gs
Everything works fine, but if I try to run that command from behind the corporate proxy, I receive this error:
Caught socket error, retrying: [Errno 10051] A socket operation was attempted to an unreachable network
I have tried several times to set the proxy settings in the .boto file, but to no avail.
Has anyone faced such a problem?
Thanks!
Please see the section "I'm connecting through a proxy server, what do I need to do?" at https://developers.google.com/storage/docs/faq#troubleshooting
Basically, you need to configure the proxy settings in your .boto file, and you need to ensure that your proxy allows traffic to accounts.google.com as well as to *.storage.googleapis.com.
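For reference, the relevant .boto entries look roughly like this (host, port, and credentials are placeholders; proxy_user/proxy_pass are only needed if the proxy requires authentication):

[Boto]
proxy = proxy.example.com
proxy_port = 3128
proxy_user = your_user
proxy_pass = your_password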
A change was just merged on GitHub yesterday that fixes some of the proxy support. Please try it out; specifically, overwrite your current copy of this file with the latest version:
https://github.com/GoogleCloudPlatform/gsutil/blob/master/gslib/util.py
I believe I am having the same problem with the proxy settings being ignored under Linux (Ubuntu 12.04.4 LTS) and gsutil 4.2 (downloaded today).
I've been watching tcpdump on the host to confirm that gsutil is attempting to route directly to Google IPs instead of to my proxy server.
It seems that on the first execution of a simple command like "gsutil -d ls", it will use the proxy settings specified in .boto for the first POST and then switch back to attempting to route directly to Google instead of through my proxy server.
Then if I Ctrl-C and re-run the exact same command, the proxy setting is no longer used at all. This difference in behaviour baffles me. If I wait long enough, I think it will work for the initial request again, so this suggests some form of caching taking place. I'm not 100% sure of this behaviour yet because I haven't been able to predict when it occurs.
I also noticed that it always first tries to connect to 169.254.169.254 on port 80 regardless of proxy settings. A grep shows that it's hardcoded into oauth2_client.py, test_utils.py, layer1.py, and utils.py (under different subdirectories of the gsutil root).
I've tried setting the http_proxy environment variable but it appears that there is code that unsets this.
I am currently trying to set up a redirect-on-write for an installation of OpenLDAP 2.2.
I have two instances running. One is configured to be read-only (only read access, database specified as read-only) and has redirect configured to point to the second instance. The second instance is configured to allow for the desired write permissions.
When I attempt a modify on the first instance it fails as expected but does not send back the referral. Am I missing a piece of the configuration? Am I even on the right path? Any guidance would be greatly appreciated. Thanks.
In the database section of your slapd.conf, did you add the redirection like this?
updateref "ldap://master-host:port/"
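For context, a minimal sketch of how that fits into the read-only instance's database section (the backend, suffix, host, and port are placeholders):

# slapd.conf on the read-only instance (placeholder values)
database    bdb
suffix      "dc=example,dc=com"
readonly    on
# referral handed back to clients that attempt a write here
updateref   "ldap://master-host:389/"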
So, it turns out the best way to do this is to go ahead and set up replication using slurpd and point all requests at the slave instance. Unfortunately you can't set up the master and slave on the same host (for obvious reasons, but still), so I had to spin up a second VM to get this going.
Honestly, if I were not trying to reproduce a redirect problem it wouldn't be worth it, but I have to duplicate a production issue.
For more information on slapd and specifically slurpd, the OpenLDAP documentation is actually crazy helpful: slurpd config for OpenLDAP 2.2
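For anyone following the same route, the slurpd-era setup boils down to directives along these lines (hosts, DNs, paths, and credentials are placeholders):

# Master slapd.conf: write a replication log and push changes to the slave
replogfile  /var/lib/ldap/replog
replica     host=slave-host:389
            binddn="cn=replicator,dc=example,dc=com"
            bindmethod=simple credentials=secret

# Slave slapd.conf: accept updates only from the replicator DN, refer everything else
updatedn    "cn=replicator,dc=example,dc=com"
updateref   "ldap://master-host:389/"

slurpd then runs alongside the master, reading the replication log and replaying the changes against the slave.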