Setting up a load balancer based on authenticated users - deployment

I'm trying to set up a load balancer that would redirect certain users to a specific version of an application. So far I was using a Blue/Green deployment strategy (once I made a new version of the app, I created a new environment and redirected traffic there). Now I would like to change this approach. I want to be able to specify users (more experienced or whatever) that would see the new site after authentication, while the others would still be redirected to the old one. If something goes wrong with the new version, all users will see the old version. Currently my load balancing is done in Apache and authentication is done at the application level. So is this even possible? I know I could hardcode it in the application, but what if there is a bug in the new feature and users are still being redirected there? I would then need to stop the application for all users and roll back to the old version, and that's bad I guess. I was thinking about using an external CAS, however I didn't find any information on whether it would be possible then. So I would like to ask: is it possible, and are there any tools (maybe some Apache plugin) for that purpose?

Here's a working solution with nginx:
1. create conf.d/balancer.conf
2. put the code into it (see below)
3. docker run -p 8080:8080 -v ~/your_path/conf.d:/etc/nginx/conf.d openresty/openresty:alpine
4. use curl to play with it
balancer.conf:
map $cookie_is_special_user $upstream {
    default http://example.com;
    ~^1$ http://scooterlabs.com/echo;
}

server {
    listen 8080;
    resolver 8.8.8.8;
    location / {
        proxy_pass $upstream;
    }
}
Testing:
curl --cookie "is_special_user=1" http://localhost:8080
This returns the contents of scooterlabs.com, which dumps the request it receives.
curl http://localhost:8080
This produces the contents of example.com.
Explanation:
The idea is that the backend app sets a special cookie for the users you treat as special, after they get authorized as usual.
Of course, this only works if both app versions are served on the same domain, so that the cookie is visible to both versions.
After that, you balance each user to the desired server depending on the cookie value.
You can easily disable such routing by tweaking your nginx config file.
With this approach you can build even more complex scenarios, e.g. set random cookie values in the range 1-10 and then gradually switch the special users over in your config file: start with those having value 1, then 1-2, and so on; a sketch of such a map is shown below.
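For the gradual-rollout idea, here is a minimal sketch of such a map block (the cookie name canary_bucket and both upstream URLs are placeholders, not from the original config):

map $cookie_canary_bucket $upstream {
    # users without a bucket cookie stay on the old version
    default http://example.com;
    # widen the range (^1$, ^[1-2]$, ^[1-3]$, ...) to move more users over
    ~^[1-3]$ http://scooterlabs.com/echo;
}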

Related

Getting Started With PeerJS

I am trying the simplest example I can, pulled directly from their website. Here is my entire html file, with code taken exactly from https://peerjs.com/index.html:
<script src="https://unpkg.com/peerjs@1.3.1/dist/peerjs.min.js"></script>
<script>
var peer = new Peer();
var conn = peer.connect('another-peers-id');
// 'open' will be launched when you successfully connect to PeerServer
conn.on('open', function(){
// here you have conn.id
conn.send('hi!');
});
</script>
In Chrome and Edge I get this in the console:
peerjs.min.js:64 GET https://0.peerjs.com/peerjs/id?ts=15956160926060.016464029424720694 net::ERR_CONNECTION_REFUSED
In Firefox I get this:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://0.peerjs.com/peerjs/id?ts=15956162489620.8436734374800061. (Reason: CORS request did not succeed).
What am I doing wrong?
@reyad has requested "a full trace of requests and responses". Here's what I see in the network tab in Firefox and Chrome: [screenshots of the failed requests were attached here]
[Note: It would have been better if you could provide a full trace of requests and responses. This problem may occur for several reasons. I'll state two solutions, so try those. If those don't work, provide a full trace of requests and responses.]
1. First solution:
Sometimes this type of error occurs because of a self-signed certificate. To solve it, open the developer tools, then go to the network tab. You'll see a list of requests. Select the request which failed because of CORS (i.e. the one that gave you Reason: CORS request did not succeed) and open it (i.e. click it). If your problem is related to the cert, you'll see the following error message:
AN ERROR OCCURED: SEC_ERROR_INADEQUATE_KEY_USAGE
To solve this, go to the URL that is causing the problem and accept the certificate manually.
2. Second solution:
Check the request (the one causing the CORS error) in the network tab of the developer tools (same as described in the first solution). You'll find a Transferred column. See what's written in the Transferred column for the failed request. If it says it was blocked by some ad-blocker, then disable the ad-blocker and your request will work fine.
[P.S.]: These solutions are based on assumptions. Hope they work. If these two do not work, please provide more info about the requests and responses. And also check this.
3. Third and final solution:
[Note: This solution may not solve your problem directly, but it'll give you an alternative solution and also insight into what your problem is and how to work around it]
Before reading the solution below, read this to understand how Access-Control-Allow-Origin works (it is the reason for the CORS error).
Let me first explain how peerjs works:
PEERJS works based on a PEER ID. So you have to get a PEER ID either from the PEERJS CLOUD SERVER or provide one yourself in the PEER CONSTRUCTOR, i.e. new Peer("some-peer-id"). The peer id has to be unique, because it is what identifies each user; peerjs uses this PEER ID to send and receive data from user to user.
Now, you should know that you're using the PEERJS CLOUD SERVER to get/generate a unique peer id, which is the default server PEERJS uses unless you specify some other server.
Now let me explain why you're facing this problem:
As you already know how CORS works, you may have already guessed that https://unpkg.com/peerjs@1.3.1/dist/peerjs.min.js (the downloaded js file) is calling https://0.peerjs.com to retrieve/generate a new unique PEER ID. But this request from https://your.website.com does not have Access-Control-Allow-Origin access for some reason; it may also be a middleware problem. So it's difficult to tell where the problem actually occurs. But one thing is for sure: it's not a fault in your code :D.
I hope all the concepts I've stated above are clear to you.
Now, on to the solutions:
Alternative approach 1 (using the PEERJS CLOUD SERVER and your own provided id):
In this approach you generate your own unique PEER ID, so "https://your.website.com" does not have to call "https://0.peerjs.com" for a unique peer id. [Note: make your peer id large enough that it's always unique, at least 64 chars long]
In this way, you can avoid the CORS problem.
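A minimal sketch of that idea (the id generator below is illustrative, not from the PeerJS docs):

<script>
// generate a long random id client-side, so the library never has to
// ask 0.peerjs.com for one
function makePeerId(len) {
  var chars = 'abcdefghijklmnopqrstuvwxyz0123456789';
  var id = '';
  for (var i = 0; i < len; i++) {
    id += chars.charAt(Math.floor(Math.random() * chars.length));
  }
  return id;
}
var peer = new Peer(makePeerId(64));
</script>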
Update:
I just saw a new issue on GitHub which says the public peerjs cloud server is now unstable or does not work properly. It gives errors like Firefox cannot establish a connection with the server at the address wss://0.peerjs.com/peerjs?key=peerjs&id=123222589562487856955685485555&token=ocyxworx62i and, in Chrome, Error in connection establishment: net::ERR_CONNECTION_REFUSED. For details check here. So it's better to use your own server (see the next approach).
Alternative approach 2 (using your own peerjs server):
You can host your own peerjs server instead of the PEERJS CLOUD SERVER. This way, you can allow access to anyone/any website you want. If you want to know how to host a peerjs server, you may visit here.
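Once your own server is up, pointing the client at it is a one-liner; a sketch, assuming a peerjs server listening on localhost:9000 under the path /myapp (host, port and path are placeholders):

var peer = new Peer('your-own-unique-id', {
  host: 'localhost', // your own peerjs server
  port: 9000,
  path: '/myapp'
});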
[P.S.]: I have studied the peerjs issues on GitHub. After reading all those issues, it seems better to use your own server rather than the peerjs cloud. There are a lot of problems with each new release of peerjs, mostly related to the connection with the peerjs cloud, and the peerjs cloud is not stable, I guess. They were hosting it on 0.peerjs.com:9000 before (not secure), but now on 0.peerjs.com:443.
I haven't used peerjs before nor set up a peerjs server. If you want to set one up, I hope the community will be able to help you do that properly.
What I understand from your question is that there is a CORS (Cross-Origin Resource Sharing) issue. Maybe what I am suggesting is not very intuitive.
First: download "https://unpkg.com/peerjs@1.3.1/dist/peerjs.min.js" into your local directory, and then include the local javascript file in the html,
like: <script src="./peerjs.min.js"></script>
Second:
You are using var peer = new Peer();
but please provide an extra unique id from your side. For example, I just created a random id and provided it.
StackOverflow link: https://stackoverflow.com/questions/21216758/peerjs-set-your-own-peerid#:~:text=1%20Answer&text=Provide%20a%20peer%20id%20when,to%20under%20Create%20a%20peer.
var a_random_id = Math.random().toString(36).replace(/[^a-z]+/g, '').substr(2, 10);
var peer = new Peer(a_random_id, {key: 'myapikey'});
Third: the best option is to run PeerServer, a server for PeerJS, of your own.
If you don't want to develop anything, just enter the few commands below.
Install the package globally:
$ npm install peer -g
Run the server:
$ peerjs --port 9000 --key peerjs --path /myapp
Started PeerServer on ::, port: 9000, path: /myapp (v. 0.3.2)
Check it: http://127.0.0.1:9000/myapp. It should return JSON with name, description, and website fields.
Details: https://github.com/peers/peerjs-server

How do I Re-route Ghost Blog Admin URL without modifying the API Address?

Ghost blog platform has a setting that allows you to change the admin panel login location (which starts as: https://whateveryoursiteis.com/ghost). Methodology / docs for changing that setting can be found here: https://ghost.org/docs/config/#admin-url
However, when using the above methodology, the API URL that is used for search etc. is ALSO modified, meaning all requests to the Ghost API will also be forwarded to the alternate domain (not just admin access).
My question is: what is the best way to achieve a redirect of the admin URL to a different domain/protocol while allowing the API URL used by Ghost to remain the same?
More background.
We are running Ghost on top of GKE (Google Kubernetes Engine) with a multi-region ingress, which allows us to dump our CloudSQL DB down to a SQLite file and then build that database into our production Docker containers, which are then deployed to the different Kubernetes nodes fronted by the GCE-Ingress load balancer.
Since we need to rebuild that database/container on content change (not just on code change), we need a separate admin URL backed by Cloud SQL where we can persist/modify our data, which then triggers the rebuild in our CI pipeline via Ghost webhooks.
Another related question might be:
Is it possible to use standard Ghost redirects (created via: https://docs.ghost.org/concepts/redirects/) to redirect the admin panel URL (i.e. https://whateveryoursiteis.com/ghost) to a different domain (i.e. https://youradminsite.com/ghost)?
Another Related GKE / GCE-Ingress Question:
Is it possible to create 301 redirects natively using Kubernetes GCE-Ingress on GKE without adding an nginx container etc?
That will be my first attempt after posting this, but I figured either way maybe it helps another Ghost platform fan down the line someplace; I will attempt to respond back as I find answers to those questions (assuming someone doesn't beat me to it!).
Regarding your question whether it's possible to create 301 redirects without adding an nginx container: I can suggest using Istio; find out more information about traffic routing here.
OK. So as it turns out, the Ghost team currently has things set up to point API connections at the admin URL. So if you change your admin URL, expect your clients to attempt to connect to that URL.
I am going to raise the potential of splitting these off as a feature request over on the Ghost forums (as soon as I get out from under pre-launch hell on the current project).
Here's the official Ghost response:
What is referred to as the 'official docker image' is not something that we as a Ghost team support.
The APIs are indeed hosted under the same URL as the admin and that's by design and not really a bug. Introducing configuration options for each API Ghost instance hosts would be a feature and should be discussed at our forum first 👍 I think it's a nice idea to be able to serve APIs from a different host, but it's not something that is within our priorities at the moment.
In case you need more granular handling of the admin site, you could introduce those on your proxy level and, for example, handle requests that are coming to /ghost/api with a different set of rules.
See the full discussion over here on the TryGhost GitHub:
https://github.com/TryGhost/Ghost/issues/10441#issuecomment-460378033
I haven't looked into what it would take to implement the feature, but the suggestion of proxying the request could work... if only I didn't need to run on GKE multi-region (which requires GCE-Ingress, which doesn't support redirection, hah!). This would be relatively easy to solve with the nginx ingress.
Hopefully this helps someone; I will update as I work through the process. As of now I have solved it by dumping my GCP CloudSQL database down to a SQLite DB file at build time (thereby allowing me to keep my admin instance clean and separate from the API endpoint, which for me remains the same URL).

keycloak Invalid parameter: redirect_uri

When I try to authenticate a user from my API against Keycloak, it gives me the error Invalid parameter: redirect_uri on the Keycloak page. I have created my own realm apart from master. Keycloak is running on http. Please help me.
What worked for me was adding the wildcard '*'. Although for production builds I am going to be more specific with the value of this field, for dev purposes you can do this.
The setting is available under keycloak admin console -> Realm_Name -> Clients -> Client_Name.
EDIT: DO NOT DO THIS IN PRODUCTION. Doing so creates a large security flaw.
If you are a .NET developer, please check the below configuration.
In the Keycloak authentication options class, set
CallbackPath = RedirectUri, // this property needs to be set, otherwise it will show the invalid redirect_uri error
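For context, a hedged sketch of where such a callback path lives in a standard ASP.NET Core OpenID Connect setup (the scheme name, authority, client id and path below are illustrative, not from the original answer):

// Program.cs sketch; requires the Microsoft.AspNetCore.Authentication.OpenIdConnect package
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddAuthentication()
    .AddOpenIdConnect("Keycloak", options =>
    {
        options.Authority = "http://localhost:8080/realms/myrealm"; // your realm
        options.RequireHttpsMetadata = false; // only because this sample authority is plain http
        options.ClientId = "my-client";
        options.ResponseType = "code";
        // must match a Valid Redirect URI registered on the Keycloak client
        options.CallbackPath = "/signin-keycloak";
    });
var app = builder.Build();
app.UseAuthentication();
app.Run();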
I faced the same error. In my case, the issue was that the Valid Redirect URIs value was not correct. These are the steps I followed:
First, log in to Keycloak as an admin user.
Then select your realm (you may be auto-directed to it).
Select Clients from the left panel.
Then select the relevant client which you configured for your app.
By default you will be on the Settings tab; if not, select it.
My app was running on port 3000, so the correct setting in my case was http://localhost:3000/*.
In other words, if your app runs on localhost:3000, that is what your Valid Redirect URIs entry should look like.
If you're getting this error because of a new realm you created:
You can directly change the URL in the URL bar to get past this error. In the URL that you are redirected to (you may have to look in Chrome dev tools for this URL), change the realm from master to the one you just created, and if you are not using https, make sure the redirect_uri is also using http.
If you're getting this error because you're trying to set up Keycloak on a public-facing domain (not localhost):
Step 1)
Follow this documentation to set up a MySQL database (link's broken. If you find some good alternative documentation that works for you, feel free to update this link and remove this message). You may also need to refer to this documentation.
Step 2)
Run the command update REALM set ssl_required = 'NONE' where id = 'master';
Note:
At this point you should technically be able to log in, but version 4.0 of Keycloak is using https for the redirect uri even though we just turned off https support. Until Keycloak fixes this, we can get around it with a reverse proxy. A reverse proxy is something we will want to use anyhow, to easily create SSL/TLS certificates without having to worry about Java keystores.
Note 2: After writing these instructions, Keycloak came out with their own proxy. They then stopped supporting it and recommended using oauth2 proxy instead. It is lacking some features the Keycloak proxy had, and an unofficial version of that proxy is still being maintained here. I haven't tried using either of these proxies, but at this point you might want to stop following my directions and use one of those instead.
Step 3) Install Apache. We will use Apache as a reverse proxy (I tried NGINX, but NGINX had some limitations that got in the way). See yum installing Apache (CentOS 7) and apt-get installing Apache (Ubuntu 16), or find instructions for your specific distro.
Step 4) Run Apache
Use sudo systemctl start httpd (CentOS) or sudo systemctl start apache2 (Ubuntu).
Use sudo systemctl status httpd (CentOS) or sudo systemctl status apache2 (Ubuntu) to check if Apache is running. If you see the words active (running) in green text, or if the last entry reads Started The Apache HTTP Server., then you're good.
Step 5) We will establish an SSL connection with the reverse proxy, and the reverse proxy will then communicate with Keycloak over http. Because this http communication happens on the same machine, you're still secure. We can use Certbot to set up auto-renewing certificates.
If this type of encryption is not good enough and your security policy requires end-to-end encryption, you will have to figure out how to set up SSL through WildFly instead of using a reverse proxy.
Note:
I was never actually able to get https to work properly with the admin portal. Perhaps this was just a bug in the beta version of Keycloak 4.0 that I'm using. You're supposed to be able to set the SSL level to only require it for external requests, but this did not seem to work, which is why we set ssl_required to NONE in step 2. From here on we will continue to use http over an SSH tunnel to manage the admin settings.
Step 6)
Whenever you try to visit the site via https, you will trigger an HSTS policy which will auto-force http requests to redirect to https. Follow these instructions to clear the HSTS rule from Chrome, and then, for the time being, do not visit the https version of the site again.
Step 7)
Configure Apache. Add the virtual host config in the code block below. If you've never done this, the first thing you'll need to do is figure out where to add this config file.
On RHEL and some other distros:
You'll need to find where your httpd.conf or apache2.conf file is located. That config file should be loading virtual host config files from another folder, such as conf.d.
If you are using Ubuntu or Debian:
Your config files will be located in /etc/apache2/sites-available/ and you'll have the extra step of enabling them by running sudo a2ensite name-of-your-conf-file.conf. That creates a symlink in /etc/apache2/sites-enabled/, which is where Apache looks for config files on Ubuntu/Debian (remember, the config file itself was placed in sites-available, slightly different).
All distros:
Once you've found the config files, change out, or add, the following virtual host entries. Make sure you don't override the SSL options that were already generated by certbot. When done, your config file should look something like this.
<VirtualHost *:80>
    RewriteEngine on

    # change https redirect_uri parameters to http
    # (note: mod_rewrite variable names must be uppercase)
    RewriteCond %{REQUEST_URI}\?%{QUERY_STRING} ^(.*)redirect_uri=https(.*)$
    RewriteRule . %1redirect_uri=http%2 [NE,R=302]

    # uncomment to force https
    # does not currently work
    #RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI}

    # forward the requests on to keycloak
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>

<IfModule mod_ssl.c>
    <VirtualHost *:443>
        RewriteEngine on

        # disable HSTS
        Header set Strict-Transport-Security "max-age=0; includeSubDomains;" env=HTTPS

        # change https redirect_uri parameters to http
        RewriteCond %{REQUEST_URI}\?%{QUERY_STRING} ^(.*)redirect_uri=https(.*)$
        RewriteRule . %1redirect_uri=http%2 [NE,R=302]

        # forward the requests on to keycloak
        ProxyPreserveHost On
        ProxyPass / http://127.0.0.1:8080/
        ProxyPassReverse / http://127.0.0.1:8080/

        # Leave the items added by certbot alone:
        # there should be a ServerName option, a bunch of options to configure
        # the location of the SSL cert files, and an option to include an
        # additional config file
    </VirtualHost>
</IfModule>
Step 8) Restart Apache. Use sudo systemctl restart httpd (CentOS) or sudo systemctl restart apache2 (Ubuntu).
Step 9)
Before you have a chance to try to log in to the server, since we told Keycloak to use http, we need to set up another method of connecting securely. This can be done either by installing a VPN service on the Keycloak server or by using SOCKS. I used a SOCKS proxy. To do this, you'll first need to set up dynamic port forwarding:
ssh -N -D 9905 user@example.com
Or set it up via Putty.
All traffic sent to port 9905 will now be securely routed through an SSH tunnel to your server. Make sure you whitelist port 9905 on your server's firewall.
Once you have dynamic port forwarding setup, you will need to setup your browser to use a SOCKS proxy on port 9905. Instructions here.
Step 10) You should now be able to log in to the Keycloak admin portal. To connect to the website, go to http://127.0.0.1 and the SOCKS proxy will take you to the admin console. Make sure you turn off the SOCKS proxy when you're done, as it does utilize your server's resources and will result in a slower internet speed for you if kept on.
Step 11) Don't ask me how long it took me to figure all of this out.
IMPORTANT UPDATE IN KEYCLOAK 18
In the newest Keycloak 18, the redirect_uri parameter for the OpenID Connect logout has been deprecated -> https://www.keycloak.org/docs/latest/upgrading/index.html#openid-connect-logout
Go to keycloak admin console > SpringBootKeycloak > Clients > login-app page.
Here, in the valid-redirect-uris section, add
http://localhost:8080/sso/login
This will help resolve the redirect-uri problem.
For me, I had a missing trailing slash / in the value for Valid Redirect URIs
[For Keycloak version 18 or higher]
None of the mentioned solutions will work if you are using Keycloak 18 or a higher version.
According to the version 18 release notes, Keycloak does not support logout with redirect_uri anymore; you need to include post_logout_redirect_uri and id_token_hint as parameters (a sketch of such a logout URL is below).
Please check the answer to this question for more information:
keycloak: using react user can login but when I try logout I get a message "Invalid parameter: redirect_uri"
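For illustration, a hedged sketch of what the new logout request looks like (host, realm, client and redirect values are placeholders; the Quarkus distribution is assumed, so there is no /auth prefix; post_logout_redirect_uri must be accompanied by id_token_hint or client_id):

https://keycloak.example.com/realms/myrealm/protocol/openid-connect/logout?post_logout_redirect_uri=https%3A%2F%2Fapp.example.com%2F&id_token_hint=<the user's id token>&client_id=my-client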
Log in to the Keycloak admin console website, select the realm and its client, then make sure all URIs of the client are prefixed with the protocol, that is, with http:// for example. An example would be http://localhost:8082/*.
Another way to solve the issue is to view the Keycloak server console output, locate the line stating the request was refused, copy the displayed redirect_uri value from it and paste it into the Valid Redirect URIs field of the client in the Keycloak admin console website. The requested URI then becomes one of the acceptable ones.
If you're seeing this problem after you've made a modification to the Keycloak context path, you'll need to make an additional change to a redirect url setting:
Change <web-context>yourchange/auth</web-context> back to <web-context>auth</web-context> in standalone.xml
Restart Keycloak and navigate to the login page (/auth/admin)
Log in and select the "Master" realm
Select "Clients" from the side menu
Select the "security-admin-console" client from the list that appears
Change the "Valid Redirect URIs" from /auth/admin/master/console/* to /yourchange/auth/admin/master/console/*
Save and sign out. You'll again see the "Invalid redirect url" message after signing out.
Now, put in your original change <web-context>yourchange/auth</web-context> in standalone.xml
Restart Keycloak and navigate to the login page (which is now /yourchange/auth/admin)
Log in and enjoy
I faced the same issue. I fixed it by going to the particular client under the respective realm and, in the redirect URL there, adding * after the complete URL.
I had the same problem with "localhost" in the redirect URL. Changing it to 127.0.0.1 in the "Valid Redirect URIs" field of the client config (Keycloak web admin console) worked for me.
It seems this problem can occur if you put whitespace in your realm name. I had the name set to Debugging Realm and got this error. When I changed it to DebuggingRealm, it worked.
You can still have whitespace in the display name. Odd that Keycloak doesn't check for this on admin input.
Even I faced the same issue. I fixed it by going to the particular client under the respective realm and adding * after the complete URL in the redirect URL field. The problem will be solved.
Example:
redirect URI: http://localhost:3000/myapp/generator/*
Looking at the exact rewrite was key for me. The wellKnownUrl lookup was returning "http://127.0.01:7070/" and I had specified "http://localhost:7070" ;-)
I faced the Invalid parameter: redirect_uri problem while following the Spring Boot and Keycloak example available at http://www.baeldung.com/spring-boot-keycloak. When adding the client on the Keycloak server, we have to provide the redirect URI for that client so that the Keycloak server can perform the redirection.
When I faced the same error multiple times, I copied the correct URL from the Keycloak server console, provided it in the Valid Redirect URIs field, and it worked fine!
This error is also thrown when your user does not have the expected role delegated in the user definition (set the role for the realm in the drop-down).
The redirect URI in your code (keycloak.init) should be the same as the redirect URI set on the Keycloak server (client -> Valid Redirect URIs).
Ran into this problem too. After two days of pulling my hair out, I discovered that the URLs in Keycloak are case-sensitive. However, the browser converts the hostname to lowercase, which means that uppercase URLs in Keycloak will never work.
e.g. my server name is MYSERVER (hostname returns MYSERVER)
Keycloak URLs are https://MYSERVER:8080/*
Browse to https://myserver:8080 -> fails invalid_url
Browse to https://MYSERVER:8080 -> fails invalid_url
Change Keycloak URLs to https://myserver:8080/*
Browse to https://myserver:8080 -> works
Browse to https://MYSERVER:8080 -> works
We also saw this, but only on certain URLs. After seeing this clue, I realized that the Java URI constructor has to be able to decode it, like so: URI uri = URI.create(redirectUri);
We had { and } in our URLs, which normally worked fine, but when going through two layers of URL decode/encode, Java decided the { and } were invalid.
We'll be changing our curly braces to something else to get around the double encode/decode issue.
I know other people provided the same answer, but my reputation was not high enough to upvote them. In the redirect menu, mine had a redirect of "0.0.0.0:8080/*". I changed it to (actualIP):8080/* and it worked.
In your client, set the origin of your request. In my case, localhost:3000 (JavaScript client).
If you are using the Authorization Code Flow then the response_type query param must be equal to code. See https://www.keycloak.org/docs/3.3/server_admin/topics/sso-protocols/oidc.html
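For reference, a hedged sketch of an Authorization Code Flow request with the correct response_type (realm, client and redirect values are placeholders; older distributions prefix the path with /auth):

https://keycloak.example.com/realms/myrealm/protocol/openid-connect/auth?client_id=my-client&redirect_uri=https%3A%2F%2Fapp.example.com%2Fcallback&response_type=code&scope=openid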
You need to check the Keycloak admin console for the frontend configuration. It is probably misconfigured for the redirect URL and web origins.
If you're trying to redirect to the keycloak login page after logout (as I was), that is not allowed by default but also needs to be configured in the "Valid Redirect URIs" setting in the admin console of your client.
Check that the value of the redirect_uri parameter is whitelisted for the client that you are using. You can manage the configuration of the client via the admin console.
The redirect uri should match exactly with one of the whitelisted redirect uri's, or you can use a wildcard at the end of the uri you want to whitelist. See: https://www.keycloak.org/docs/latest/server_admin/#_clients
Note that using wildcards to whitelist redirect uri's is allowed by Keycloak, but is actually a violation of the OpenId Connect specification. See the discussion on this at https://lists.jboss.org/pipermail/keycloak-dev/2018-December/011440.html
My issue was caused by the wrong client_id (OPENID_CLIENT_ID) I had defined in the deployment.yaml. Make sure this field is assigned with the one in Keycloak client id.
The problem seems related to an invalid value in the Valid Redirect URIs field. You can try one of these tips:
set the same value as the Client ID (if it's a URL), making it end with /*, or
tryingToLearn's response [https://stackoverflow.com/a/51420355/97799] (but beware of security issues).
I ran into this due to a malformed redirect URL in the Keycloak client:
https://http://192.168.1.10/hub/oauth_callback
As soon as I took out the https://, the error went away.
I'm using version 20.0.2 and, for me, the solution was to simply add a '+' in the "Valid post logout redirect URIs" field.
As stated in the help balloon, "A value of '+' will use the list of valid redirect uris".
I faced a similar issue because I created a realm with two words and a space in it, e.g. Test Realm; this gave me the error. I put an underscore in and was good to go, e.g. Test_Realm.

Proxy URL 'incache....com:8080' does not contain a valid hostname

Recently I was forced to switch from SVN to TFS.
I'm trying to get this working with TEE on our RedHat box.
Any action seems to end with something like this:
user@rh: tf -map $/XX/XX . -workspace:app-job -server:http://tfs.domain.com:8080/tfs/TFS2008/ -profile:TFS1_PRF_C
Password:
An error occurred: Proxy URL 'incache.domain.com:8080' does not contain a valid hostname.
Could someone help with that?
Your question is a little vague about what you expect to happen here (are you supposed to be using an HTTP proxy to access your TFS server? Or is the problem that it's picking up your system HTTP proxy?)
I'm going to assume that you do not need an HTTP proxy to access your internal TFS server, since in most corporate environments the proxy is used to get outside the network, not inside. By default, the Team Explorer Everywhere CLC does try to use your system HTTP proxy; however, this is configurable in your connection profile.
In order to override your default system HTTP proxy for that profile, you can set the profile property httpProxyIgnoreGlobal to true:
tf profile -edit -boolean:httpProxyIgnoreGlobal=true TFS1_PRF_C

Protecting ClickOnce web-deployed installations

I have a link on my website to the standard publish page generated by Visual Studio. My concern is that if anybody finds out the URL of that page, they can download my software. Sure, I could password-protect the page with the link, but that still would not protect the download URL itself. Are there any ways to secure the ClickOnce upload? I have looked around, and it seems like I am stuck in this sense.
A public URL is a security issue in ClickOnce deployment. However, there is a solution for your problem if your web server has Windows and .NET installed. Tell me if you have one. I will have to come up with another workaround in case you have a Linux web server.
Brief
Firstly, a bit of information about ClickOnce deployment. When you deploy the application, the GET requests made on the server are (assuming WebDir is the publish directory on the server):
G-1. GET /WebDir/setup.exe (initial download)
G-2. GET /WebDir/MyApp.Application (setup.exe -url request)
G-3. GET /WebDir/MyApp.Application (.application deployment provider URL request)
G-4. GET /WebDir/Application Files/MyApp_1_0_0_0/MyApp.exe.manifest (application manifest request)
G-5. GET /WebDir/Application Files/MyApp_1_0_0_0/MyApp.exe.deploy and other .deploy files ... (application file requests)
Implementation
Now, the solution is to intercept these file requests on the server. On IIS, you can attach a custom HttpHandler and handle the request. On Apache, you can redirect requests to PHP code using .htaccess files. Apart from this, you will have to generate a unique identifier uid for each client instance downloaded from the server (it can be your license key) and put that in the deployment provider URL query parameters.
Directory Structure
Create an "Application" folder inside your WebDir and restrict access to /WebDir/Application/. Everything else can be inside /WebDir/.
File Requests
So here's what you do on an Apache web server hosted on a Windows machine:
Create a custom download page or use the one created from publishing the application using Visual Studio (but you will have to edit it manually!). Let's assume that page is /WebDir/Download.php.
After authenticating the user from Download.php, you have to send setup.exe from your code (you can do it with readfile() in PHP) to the user. However, the catch is that the bootstrapper (setup.exe), after installing, will do a GET request [G-2]. Don't forget that you have to validate this file request too. So you change the "setup.exe -url" property to include the uid before returning the file. For example, change it to /WebDir/uid/MyApp.Application [G-2]. You can use MsiStuff.exe to change the URL property for the bootstrapper.
Using a .htaccess file, rewrite [G-2] to /WebDir/Handler.php?user=uid (a sketch is shown after the request list below). From Handler.php, you can check if it is a valid uid. If it is valid, you will have to include the uid in the deployment provider URL and the "Dependent Assemblies Path" in the deployment manifest, so that if an upgrade request comes (it essentially requests the deployment manifest), you can validate the user there too. Add uid to the query string parameters; for example, change it to /WebDir/MyApp.application?user=uid [G-3]. Don't forget that you will have to re-sign the manifests once you modify them. Use Mage or write your own code to do that.
So finally, the GET requests on the server will be (assuming uid=1f3rd):
G-1. GET /WebDir/Download.php -> Action: return setup.exe with the -url changed
G-2. GET /WebDir/Application/setup.exe/1f3rd/MyApp.Application -> Action: redirect, validate user, change URL, re-sign and return file
G-3. GET /WebDir/Application/setup.exe/MyApp.Application?user=1f3rd -> Action: redirect, validate user and return file
G-4. GET /WebDir/Application/1f3rd/Application Files/MyApp_1_0_0_0/MyApp.exe.manifest -> Action: redirect, validate user and return file
G-5. GET /WebDir/Application/1f3rd/Application Files/MyApp_1_0_0_0/MyApp.exe.deploy and other .deploy files ... -> Action: redirect, validate user and return file
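As a rough illustration of the .htaccess part described above, a minimal sketch (the file names, uid pattern and query parameters are hypothetical; the PHP handler and manifest re-signing are not shown):

RewriteEngine On
# route every uid-prefixed request through the PHP validator, passing the
# uid and the requested file as query parameters
RewriteRule ^Application/([a-z0-9]+)/(.+)$ Handler.php?user=$1&file=$2 [L,QSA]
# forbid anything that bypasses the handler
RewriteRule ^Application/ - [F]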
Pros
The application is successfully deployed and upgraded only if all the requests have a valid uid in the URL.
You can now identify different instances of the application on client systems. You can track the update history, do a selective version upgrade/downgrade and much more!
Cons
You will need a Windows server to implement the above, since you need mage.exe (or your own .NET code-signing application) and MsiStuff.exe.
You may have minor performance issues since you are performing validation on every file request. You can choose to skip validation on .manifest and .deploy file requests.
You will have to ensure proper security for the company's certificate, which will be present on the web server for signing. (You can store it on the server's local file-system if you have the full server to yourself; in that case it is fine unless somebody breaks into the machine itself!)
If you want me to make something clear or explain in detail, feel free to ask. In case you have suggestions for modifications to the above, post those too.
I will write a detailed CodeProject article if I have spare time someday.