Why does new FileStream get access violation in REST service when path has IP Address?

Why does this get an access violation:
using fs := new FileStream( fullFilename, FileMode.Open, System.IO.FileAccess.Read )
when fullFilename is something like "\\52.1.1.1\d$\temp\file.bmp" and EVERYONE has access to the folder?
If the file is just "d:\temp\file.bmp", the FileStream can read it. There is something about the IP address part.
The language is Oxygene, though I'm not sure why that would make a difference; it is .NET, FWIW.

Guessing a lot here, but:
I guess your REST service is running in a web server such as IIS? By default, Windows services run as the LocalService account, which "presents anonymous credentials on the network".
You say "EVERYONE has access to the folder", but "Contrary to popular belief, anyone who is logged in anonymously—that is, they did not authenticate—will NOT be included in the EVERYONE group."
(And why does EVERYONE have access to d$, an administrative share, anyway?)
Your service has no access, so you either need to make it pass credentials to the share explicitly, or run the IIS application pool under a credentialed account that can access the share.

Related

What is the "[full path]" component of the SSL Certificate Authority given by MySQL and PostgreSQL (boto3) calls in the AWS docs?

In the AWS documentation for "Connecting to your DB instance using IAM authentication and the AWS SDK for Python (Boto3)", the following call is made to both psycopg2.connect (shown) and mysql.connector.connect:
conn = psycopg2.connect(host=ENDPOINT, port=PORT, database=DBNAME, user=USR, password=token, sslmode='prefer', sslrootcert="[full path]rds-combined-ca-bundle.pem")
cur = conn.cursor()
cur.execute("""SELECT now()""")
query_results = cur.fetchall()
print(query_results)
I see some discussion about the ssl_ca path (here and here) and what those bundles are used for. But none of the three links I've given here describe the [full path] component given by the AWS docs, or where it is pointing to. My current guess (from the second link) is this URL, but I'd like to be sure.
Additionally, what are the advantages to having this bundle downloaded to the remote EC2 on which these Python 3 (boto3) scripts are running?
EDIT: By the way, the above call to psycopg2.connect is working in Jupyter with Python 3.9.5 on an EC2 currently, with the [full path] written as-is...
You should replace '[full path]' with the filesystem path (the directory) where you saved the .pem file when you downloaded it (from that last URL you gave) to the local computer.
The advantage of using it is that your client will verify it connected to the correct database, and not to some malicious system intercepting your traffic. I don't know how advantageous you consider this: if someone has compromised Amazon deeply enough to intercept their internal traffic, they might have compromised their CA as well. But there is at least some possibility they did one without the other.
Your code as shown does not work for me, because ssl_ca is not how it is spelled. Assuming you used the code actually given at your first link for PostgreSQL:
sslmode='prefer', sslrootcert="[full path]rds-combined-ca-bundle.pem"
Then the reason it works despite the bogus path is that "prefer" means it doesn't care if the root cert is missing; it just skips validation in that case. If you change it to 'verify-full', then presumably it would stop working.
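For illustration, here is roughly what that call looks like once '[full path]' has been replaced with a real directory and sslmode tightened to 'verify-full'; the endpoint, user, region and the /home/ec2-user/ path below are placeholders, not values from the AWS docs:

import boto3
import psycopg2

# Placeholder connection details -- substitute your own endpoint, port, user and region.
ENDPOINT = "mydb.123456789012.us-east-1.rds.amazonaws.com"
PORT = 5432
DBNAME = "mydb"
USR = "db_user"
REGION = "us-east-1"

# Generate a short-lived IAM auth token, as in the AWS docs.
rds = boto3.client("rds", region_name=REGION)
token = rds.generate_db_auth_token(DBHostname=ENDPOINT, Port=PORT,
                                   DBUsername=USR, Region=REGION)

# "[full path]" becomes the directory where you saved the CA bundle,
# e.g. /home/ec2-user/ if you downloaded it to your home directory.
conn = psycopg2.connect(host=ENDPOINT, port=PORT, database=DBNAME, user=USR,
                        password=token,
                        sslmode="verify-full",   # fail instead of silently skipping validation
                        sslrootcert="/home/ec2-user/rds-combined-ca-bundle.pem")
cur = conn.cursor()
cur.execute("""SELECT now()""")
print(cur.fetchall())

With 'verify-full', a wrong or missing sslrootcert path makes the connection fail instead of quietly falling back to an unverified one.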

How do I Re-route Ghost Blog Admin URL without modifying the API Address?

The Ghost blog platform has a setting that allows you to change the admin panel login location (which starts as https://whateveryoursiteis.com/ghost). The methodology/docs for changing that setting can be found here: https://ghost.org/docs/config/#admin-url
However, when using the above methodology, the API URL that is used for search etc. is ALSO modified, meaning all requests to the Ghost API will be forwarded to the alternate domain as well (not just the admin access).
My question is: what is the best way to redirect the admin URL to a different domain/protocol while allowing the API URL used by Ghost to remain the same?
More background:
We are running Ghost on top of GKE (Google Kubernetes Engine) behind a multi-region ingress, which allows us to dump our Cloud SQL DB down to a SQLite file and build that database into our production Docker containers, which are then deployed to the different Kubernetes nodes fronted by the GCE-Ingress load balancer.
Since we need to rebuild that database/container on content change (not just on code change), we need a separate admin URL backed by Cloud SQL where we can persist/modify our data, which then triggers the rebuild in our CI pipeline via Ghost webhooks.
Another related question might be:
Is it possible to use standard Ghost redirects (created via https://docs.ghost.org/concepts/redirects/) to redirect the admin panel URL (i.e. https://whateveryoursiteis.com/ghost) to a different domain (i.e. https://youradminsite.com/ghost)?
Another related GKE / GCE-Ingress question:
Is it possible to create 301 redirects natively using Kubernetes GCE-Ingress on GKE, without adding an nginx container etc.?
That will be my first attempt after posting this, but I figured either way maybe it helps another Ghost platform fan down the line someplace. I will try to respond back as I find answers to those questions (assuming someone doesn't beat me to it!).
Regarding your question whether it's possible to create 301 redirects without adding an nginx container, I can suggest using Istio; find out more information about traffic routing here.
OK. So as it turns out, the Ghost team currently has things set up to point API connections at the admin URL. So if you change your admin URL, expect your clients to attempt to connect to that URL.
I am going to raise the possibility of splitting these off as a feature request over on the Ghost forums (as soon as I get out from under pre-launch hell on the current project).
Here's the official Ghost response:
What is referred as 'official docker image' is not something that we
as a Ghost team support.
The APIs are indeed hosted under the same URL as the admin and that's
by design and not really a bug. Introducing configuration options for
each API Ghost instance hosts would be a feature and should be
discussed at our forum first 👍 I think it's a nice idea to be able to
serve APIs from different host, but it's not something that is within
our priorities at the moment.
In case you need more granular handling of admin site, you could
introduce those on your proxy level and for example, handle requests
that are coming to /ghost/api with a different set of rules.
See the full discussion over here on the TryGhost GitHub:
https://github.com/TryGhost/Ghost/issues/10441#issuecomment-460378033
I haven't looked into what it would take to implement the feature, but the suggestion about proxying the request could work... if only I didn't need to run on GKE multi-region (which requires GCE-Ingress, which doesn't support redirection, hah!). This would be relatively easy to solve with the nginx ingress.
Hopefully this helps someone. I will update as I work through the process. For now I have solved it by dumping my GCP Cloud SQL database down to a SQLite db file at build time (thereby allowing me to keep my admin instance clean and separate from the API endpoint, which for me remains the same URL).

Can I call synchroniseUserDirectories (ConfluenceRpc) via REST, SOAP or XML-RPC?

I am using Confluence 4.2.5 (build 3284) with CAS SSO connected to my LDAP server and would like to be able to call synchroniseUserDirectories() from the LDAP server when a user changes their password so that the change is instantaneous.
The way it works now is that users have to wait for Confluence to run its periodic LDAP synchronization, which can be disconcerting for them.
I have tried using the XML-RPC interface to call changeUserPassword() (as an administrator), but it doesn't work. The operation raises an exception: "Error changing password for user ...". I presume that is because the user is defined in LDAP, but I can't tell for sure because the exception message wasn't clear about the cause.
Here is example code that I would like to be able to use. It doesn't work.
#!/usr/bin/env python
import xmlrpclib
url = 'https://docs.example.com'
admin_user = 'frobisher'
admin_pass = 'supersecretstuff'
username = 'bigbob'
new_password = 'bigbobsbigsecret'
server = xmlrpclib.ServerProxy(url + '/rpc/xmlrpc')
token = server.confluence2.login(admin_user, admin_pass)
# CITATION: https://developer.atlassian.com/display/CONFDEV/Remote+Confluence+Methods
# this doesn't exist but would be my preferred approach.
# It raises a NoSuchMethodException exception.
server.confluence2.synchroniseUserDirectories(token)
# this throws a general exception, because of the LDAP? The message
# wasn't clear about the source of the problem.
# server.confluence2.changeUserPassword(token,
#                                        username,
#                                        new_password)
server.confluence2.logout(token)
Is there any way to do this using SOAP or REST? I was concerned about REST because it sounds like it is still a prototype.
If none of those approaches will work, can it be done with a simple plugin, considering that this must be a push operation from the LDAP server to the Confluence server? I have no experience writing plugins, but I do some Java work occasionally.
Any hints would be greatly appreciated.
The short answer is "no". The ability to synchronise remote user directories is not exposed as a remote operation in Confluence.
The long answer is "yes": you can write a plugin to do this. If you're already familiar with Java, then perhaps the best answer is to just show you some source code I've written that performs a similar function: https://bitbucket.org/jaysee00/confluence-user-sync-api. This plugin gives you SOAP, XML-RPC and JSON-RPC methods to force an individual user account to be synced into Confluence from a remote directory.
That might suit your purposes as-is, but I imagine it would also be possible to edit the source of this plugin and change it to synchronise an entire directory.
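If that plugin suits you, calling its exposed remote method from Python would look much like the XML-RPC code in the question; note that the method name used below (synchroniseUser) and its arguments are an assumption on my part, so check the plugin's documentation for the real names:

#!/usr/bin/env python
import xmlrpclib

url = 'https://docs.example.com'
server = xmlrpclib.ServerProxy(url + '/rpc/xmlrpc')
token = server.confluence2.login('frobisher', 'supersecretstuff')

# Hypothetical remote method provided by the user-sync plugin; the actual
# method name and signature depend on the plugin version installed.
server.confluence2.synchroniseUser(token, 'bigbob')

server.confluence2.logout(token)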

Query on DNS & connecting to an existing VM

In my current code base, when I create a VM, the DNS name is dynamically set to the same value as the instance name. For example, if my VM name is "anandInstance", the DNS name is generated as "anandInstance.cloudapp.net". Is there a way to change the DNS name to something like "dns1.cloudapp.net" during creation through the REST API?
"Connect to existing VM": is it possible to achieve this option through a REST call? In the "connect to existing..." option we get a list of VMs/services to choose from, and the VM is created successfully. How can the same be achieved using the API?
Thanks
In my current code base, when I create a VM, the DNS name is
dynamically set to the same value as the instance name. For example, if
my VM name is "anandInstance", the DNS name is generated as
"anandInstance.cloudapp.net". Is there a way to change the DNS name to
something like "dns1.cloudapp.net" during creation through the REST API?
I don't think it is possible. Imagine what a nightmare the portal would become if you were able to do so: how would you link a Cloud Service (whatever.cloudapp.net) to an actual deployment (MyDemoVm123)? However, you can use your own domain and have CNAME records pointing to your "want-to-change-for-some-reason.cloudapp.net" (frankly, I suspect we will soon be using even longer names).
"Connect to existing VM" , is it possible to achieve this option
through REST call?
Connecting to a VM is essentially opening an RDP session. If it is a Windows VM, you can try using the Download RDP File API call. Once you get the file, just start it with Process.Start. If it is a Linux VM, just start an SSH client against port 22 (or the one you have defined) on the Cloud Service DNS name you have.
UPDATE
From the Azure portal, for the stand-alone machine option, we are able
to give the DNS name with the default cloudapp.net. How do we do the
same through the REST API call? Is there any specific parameter to
specify the same?
When you are using the REST API, you first create a Cloud Service (still named "hosted service" in the REST API) where your machine will be hosted. Here you give the name for that hosted service (the DNS name with the default cloudapp.net). Then you call the Create Virtual Machine Deployment API action.
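As a rough sketch of that first step (not the exact payload from the documentation), creating the hosted service whose name becomes the something.cloudapp.net DNS label could look like this with the Service Management REST API; the subscription ID, certificate file, location and API version below are placeholders:

import base64
import requests  # assumes the 'requests' library is available

SUBSCRIPTION_ID = "your-subscription-id"   # placeholder
CERT = "management-cert.pem"               # PEM with the management certificate and key
SERVICE_NAME = "dns1"                      # becomes dns1.cloudapp.net

body = """<?xml version="1.0" encoding="utf-8"?>
<CreateHostedService xmlns="http://schemas.microsoft.com/windowsazure">
  <ServiceName>{name}</ServiceName>
  <Label>{label}</Label>
  <Location>West Europe</Location>
</CreateHostedService>""".format(name=SERVICE_NAME,
                                 label=base64.b64encode(SERVICE_NAME.encode()).decode())

resp = requests.post(
    "https://management.core.windows.net/%s/services/hostedservices" % SUBSCRIPTION_ID,
    data=body,
    cert=CERT,
    headers={"x-ms-version": "2012-03-01", "Content-Type": "application/xml"})
print(resp.status_code)  # 201 means the hosted service (and its DNS name) was created

After that succeeds, the Create Virtual Machine Deployment call is made against that hosted service.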
In case "connect to existing.." option , we are getting a list of vms/services to choose and VM is getting created successfully. How to
achieve the same using API.
When you want to get a list of all VMs, just get a list of all hosted services, then get the properties of each and work out whether it is a VM or a plain Cloud Service (perhaps by querying the properties of each service). I don't see direct access to a list of Virtual Machines. But as this feature is in PREVIEW, things might change in the future.
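For instance, a minimal sketch of that listing step (again with a placeholder subscription ID and certificate) might be:

import requests
import xml.etree.ElementTree as ET

SUBSCRIPTION_ID = "your-subscription-id"   # placeholder
CERT = "management-cert.pem"               # placeholder management certificate

resp = requests.get(
    "https://management.core.windows.net/%s/services/hostedservices" % SUBSCRIPTION_ID,
    cert=CERT,
    headers={"x-ms-version": "2012-03-01"})

ns = "{http://schemas.microsoft.com/windowsazure}"
for svc in ET.fromstring(resp.content).iter(ns + "HostedService"):
    # Each hosted service is a candidate; query its properties (embed-detail)
    # to see whether it actually contains a virtual machine deployment.
    print(svc.find(ns + "ServiceName").text)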
Hope my answer is clear?

Protecting ClickOnce web-deployed installations

I have a link on my website to the standard publish page generated by Visual Studio. My concern is that if anybody finds out the URL to that page, they can download my software. Sure, I could password protect the page with the link, but it still would not be protecting the download URL. Are there any ways to secure the ClickOnce upload? I have looked around, and it seems like I am stuck in this sense.
A public URL is a security issue in ClickOnce deployment. However, there is a solution to your problem if your web server has Windows and .NET installed. Tell me if you have one; I will have to come up with another workaround for a Linux web server in case you have that.
Brief
Firstly, a bit of information about ClickOnce deployment. When you deploy the application, the GET requests made on the server are (assuming WebDir is the publish directory on the server):
G-1. GET /WebDir/setup.exe (Initial download)
G-2. GET /WebDir/MyApp.Application (setup.exe -url request)
G-3. GET /WebDir/MyApp.Application (.application deployment provider URL request)
G-4. GET /WebDir/Application Files/MyApp_1_0_0_0/MyApp.exe.manifest (Application manifest request)
G-5. GET /WebDir/Application Files/MyApp_1_0_0_0/MyApp.exe.deploy and other .deploy files ... (Application file requests)
Implementation
Now, the solution is to intercept these file requests on the server. On IIS, you can attach a custom HTTP handler and handle the request. On Apache, you can redirect requests to PHP code using .htaccess files. Apart from this, you will have to generate a unique identifier (uid) for each client instance downloaded from the server (it can be your license key) and put that in the deployment provider URL's query parameters.
Directory Structure
Create an "Application" folder inside your WebDir and restrict access to /WebDir/Application/. Rest everything can be there inside /WebDir/
File Requests
So here's what you do on an Apache web server hosted on a Windows machine:
Create a custom download page, or use the one created when publishing the application from Visual Studio (but you will have to edit it manually!). Let's assume that page is /WebDir/Download.php.
After authenticating the user from Download.php, you have to send setup.exe to the user from your code (you can do it with readfile() in PHP). However, the catch is that the bootstrapper (setup.exe), after installing, will do a GET request [G-2]. Don't forget that you have to validate this file request too. So basically you change the "setup.exe -url" property to include the uid before returning the file. For example, change it to /WebDir/uid/MyApp.Application [G-2]. You can use MsiStuff.exe to change the URL property of the bootstrapper.
Using a .htaccess file, rewrite [G-2] to /WebDir/Handler.php?user=uid. From Handler.php, you can check whether it is a valid uid. If it is valid, you will have to include the uid in the deployment provider URL and the "Dependent Assemblies Path" in the deployment manifest, so that if an upgrade request comes in (it essentially requests the deployment manifest), you can validate the user there too. Add the uid to the query string parameters. For example, change it to /WebDir/MyApp.application?user=uid [G-3]. Don't forget that you will have to re-sign the manifests once you modify them. Use Mage or write your own code to do that.
So finally, the GET requests on the server will be (assuming uid=1f3rd):
G-1. GET /WebDir/Download.php
     Action: return setup.exe with the -url changed
G-2. GET /WebDir/Application/setup.exe/1f3rd/MyApp.Application
     Action: redirect, validate user, change URL, re-sign and return file
G-3. GET /WebDir/Application/setup.exe/MyApp.Application?user=1f3rd
     Action: redirect, validate user and return file
G-4. GET /WebDir/Application/1f3rd/Application Files/MyApp_1_0_0_0/MyApp.exe.manifest
     Action: redirect, validate user and return file
G-5. GET /WebDir/Application/1f3rd/Application Files/MyApp_1_0_0_0/MyApp.exe.deploy and other .deploy files ...
     Action: redirect, validate user and return file
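To make the interception idea concrete, here is a minimal sketch of the per-request uid validation written in Python (Flask) rather than the PHP/.htaccess setup described above; the route shape, uid store and directory are illustrative assumptions, not part of the original scheme:

from flask import Flask, abort, send_from_directory

app = Flask(__name__)

# Illustrative uid store; in practice this would be your license/key database.
VALID_UIDS = {"1f3rd"}

# Restricted directory that holds the real ClickOnce files.
APP_DIR = "/WebDir/Application"

@app.route("/WebDir/<uid>/<path:filename>")
def clickonce_file(uid, filename):
    # Every deployment/application file request must carry a valid uid.
    if uid not in VALID_UIDS:
        abort(403)
    # NOTE: the manifests (.application / .exe.manifest) would also need their
    # URLs rewritten and re-signed (for example with Mage) before being served.
    return send_from_directory(APP_DIR, filename)

The same check is what Handler.php performs in the Apache variant above.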
Pros
The application is successfully deployed and upgraded only if all the requests have a valid uid in the URL.
You can now identify different instances of the application on client systems. You can track update history, do selective version upgrades/downgrades, and much more!
Cons
You will need a Windows server to implement the above, since you need mage.exe (or your own .NET code-signing application) and MsiStuff.exe.
You may have minor performance issues, since you are performing validation on every file request. You can choose to skip validation on .manifest and .deploy file requests.
You will have to ensure proper security for the company's certificate, which will be present on the web server for signing. (You can store it on the server's local file system if you have the whole server to yourself; in that case it is fine unless somebody breaks into the machine itself!)
If you want me to clarify something or explain it in more detail, feel free to ask. If you have suggestions for modifying the above, post those too.
I will write a detailed CodeProject article if I have spare time someday.