Storing a Service Bus connection string locally for a Windows service

I have a web app hosted in Azure posting messages to a Service Bus queue. The connection string is stored in a secure vault and all is good.
I would like a number of Windows services running worker threads on various customers' local machines to pick up their own messages; they will then perform a set of tasks based on the message they receive.
How do you secure the Service Bus connection string for the Windows service? Appsettings?

There are a couple of options built into Windows: Credential Manager, a general-purpose store for passwords on a machine, and the Windows Data Protection API (DPAPI), which helps with encrypting/decrypting secrets you can store somewhere on disk. Both use the user's password to encrypt and decrypt the stored secrets.
What I would suggest as an added layer of protection is to issue the customers a Service Principal ID/secret instead of the Service Bus connection string, with read access to that particular secret in your Key Vault. This decouples the Service Bus connection string from your customers: you can rotate your keys without contacting each one, and if you issue individual Service Principals you'll be able to revoke access easily. Adding the Service Principals to a security group and giving the group access to the Key Vault will make management much easier.
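The indirection described above can be sketched with a toy in-memory model (a stand-in for Key Vault, not the real Azure SDK; the class and principal names here are purely illustrative):

```python
# Hedged sketch: a mock of the Key Vault indirection. In production you would
# use azure-identity and the Key Vault secrets client; MockVault and the
# principal ids below are invented for illustration only.

class MockVault:
    """Stands in for Azure Key Vault: secrets plus per-principal access lists."""
    def __init__(self):
        self._secrets = {}
        self._acl = {}  # secret name -> set of principal ids allowed to read it

    def set_secret(self, name, value):
        self._secrets[name] = value

    def grant(self, name, principal_id):
        self._acl.setdefault(name, set()).add(principal_id)

    def revoke(self, name, principal_id):
        self._acl.get(name, set()).discard(principal_id)

    def get_secret(self, name, principal_id):
        if principal_id not in self._acl.get(name, set()):
            raise PermissionError(f"{principal_id} has no access to {name}")
        return self._secrets[name]

vault = MockVault()
vault.set_secret("sb-connection", "Endpoint=sb://old;SharedAccessKey=k1")
vault.grant("sb-connection", "sp-customer-a")

# Each Windows service authenticates as its own service principal and fetches
# the connection string at startup instead of storing it on the customer's disk.
conn = vault.get_secret("sb-connection", "sp-customer-a")

# Rotating the key is one vault update; no customer needs to be contacted.
vault.set_secret("sb-connection", "Endpoint=sb://new;SharedAccessKey=k2")

# Revoking one customer's principal cuts off only that customer.
vault.revoke("sb-connection", "sp-customer-a")
```

The point of the sketch is the shape of the dependency: services hold only a principal identity, never the connection string itself, so rotation and revocation are server-side operations.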

Related

Authentication server microservice: should I use different services for different user functionalities?

I have an authentication server using OAuth2.
I use it for:
Authentication from the other services: subscription, changing and retrieving passwords, etc.
As a resource server to store and retrieve additional user and group information. I have a many-to-many relationship between users and groups.
Should I separate the second set of functionalities into another standalone service that works as a resource server only, and keep only the authentication part on the authorization server?
That way I could scale these two services horizontally and independently.
Yes, the better idea would be to have the configuration as a separate standalone service running in the cloud. With the configuration server as a separate service, you can add all the authorization details and other sorts of settings such as DB details, API details, message queue configuration, etc., and connect any number of services to it.

SSO using Kerberos on Windows and Linux

We have a client/server based application that is developed internally. Clients and server communicate over a TCP/IP connection with an application-specific protocol. The clients run on Windows and the server runs on Linux. All machines are in the same Active Directory/Kerberos domain/realm.
Currently, the user enters a username and password when they start the application. The server checks the username and password (authentication). Based on the username, the server also determines access to resources (authorization).
We want to add Single Sign-On (SSO) capabilities to the application. That is, we do not want the user to enter a username and password but we want to automatically logon as the current Windows user.
Of course, determining the current Windows user has to be done securely.
I have come up with the following setup:
I use SSPI (Negotiate) on Windows and GSSAPI on Linux.
When the client connects to the server, it uses AcquireCredentialsHandle (Negotiate) to get the credentials of the current Windows user.
The client uses InitializeSecurityContext (Negotiate) to generate a token based on these credentials.
The client sends the token to the server.
The server uses gss_acquire_cred() to get the credentials of the service. These are stored in a .keytab file.
The server receives the token from the client.
The server uses gss_accept_sec_context() to process the token. This call also returns the "source name", that is the current Windows user of the client.
The server uses the "source name" as the username: the server performs no additional authentication. The server still performs authorization.
This works but I do have some questions:
Is this secure? It should not be possible for the client to specify any other username than the Windows user of the client process. If a user has the credentials to create a process as another user (either legally or illegally), then this is allowed.
Should I perform additional checks to verify the username?
Are there alternative ways to achieve SSO in this setup? What are their pros and cons?
What you've described here is the correct way to authenticate the user. You should not have to worry about the user specifying a different name; that's what Kerberos takes care of for you.
If the client is able to obtain a service ticket, then they must have been able to authenticate against the KDC (Active Directory). The KDC creates a service ticket that includes the user's name, and encrypts it with the service's secret key.
The client would not be able to create a ticket for the server with a fake name, because it doesn't have the necessary key to encrypt the ticket.
Of course, this all assumes that you've set everything up correctly; for example, the client should not have access to the service's keytab file, and the service should not have any principals in its keytab except its own.
There's a pretty detailed explanation of how it works here.
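The argument above can be illustrated with a toy model. Real Kerberos encrypts the service ticket with the service's symmetric key; in this sketch an HMAC over the verified client name stands in for that sealing, and all key and principal names are invented:

```python
# Toy model of why a client cannot forge the "source name": only the KDC and
# the service hold SERVICE_KEY (the keytab secret). HMAC stands in for the
# ticket encryption real Kerberos performs.
import hashlib
import hmac

SERVICE_KEY = b"keytab-secret"  # known only to the KDC and the service

def kdc_issue_ticket(client_name: str) -> tuple:
    # The KDC has already authenticated the client, and seals the *verified*
    # name with the service's key.
    tag = hmac.new(SERVICE_KEY, client_name.encode(), hashlib.sha256).digest()
    return (client_name, tag)

def service_accept(ticket: tuple) -> str:
    # Analogue of gss_accept_sec_context(): verify the seal, return the name.
    name, tag = ticket
    expected = hmac.new(SERVICE_KEY, name.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise PermissionError("ticket was not sealed by the KDC")
    return name  # the "source name" the server can trust without a password

# Legitimate flow: the server recovers the authenticated name.
assert service_accept(kdc_issue_ticket("alice@EXAMPLE.COM")) == "alice@EXAMPLE.COM"

# A client without SERVICE_KEY cannot substitute another name: reusing
# alice's tag under a different name fails verification.
_, alice_tag = kdc_issue_ticket("alice@EXAMPLE.COM")
try:
    service_accept(("admin@EXAMPLE.COM", alice_tag))
except PermissionError:
    print("forged ticket rejected")
```

This is exactly why no additional username check is needed on the server: the name arrives inside material only the KDC could have produced.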

Security for on-prem/cloud REST Application

I've been reading security articles for several days, but have no formal training in the field. I am developing a configuration and management application for an IoT device. It is meant to be run either on an internal network, or accessed over the web.
My application will be used by IT admins, managers, and factory-floor workers. Depending on the installation, there will be varying levels of infrastructure in place. It could run on a laptop on the floor itself, on a server, or hosted in the cloud. For this reason, we can not assume that our clients will have the kind of infrastructure you might find at a datacenter or in the cloud, for example CAS or NTP.
Our application provides a REST API for client applications to gather data. We'd like to use roles to restrict what data users can access. I've gathered that a common solution for authentication is to encode the username/pass in the REST Header. However, this is completely insecure unless sent over a secure channel.
As I understand it, SSL certificate authorities grant certs for a specific domain. Our application will have no set domain, and a different IP depending on the installation. Many web clients do not trust self-signed certs. It's not clear to me whether a self-signed certificate is good enough for a typical application developer who will be consuming our interface.
With this being the case:
1) What are my options to set up a secure channel, internally or via the web?
2) Am I making assumptions about how our product will be used that damage our users' security unnecessarily?
Well, you can use custom encryption to encrypt the data being sent to the applications.
You can also use JSON Web Tokens to secure your REST API (https://en.wikipedia.org/wiki/JSON_Web_Token). The tokens could be generated by a centralized authentication server and included in all requests sent by the client applications to the server.
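A JWT is just two base64url-encoded JSON segments plus an HMAC signature. A minimal HS256 sketch using only the standard library (the secret and claim values are illustrative; in practice you would reach for a vetted library such as PyJWT and add an expiry claim):

```python
# Minimal HS256 JSON Web Token: header.payload.signature, base64url-encoded.
import base64
import hashlib
import hmac
import json

SECRET = b"shared-auth-server-secret"  # illustrative signing key

def _b64url(data: bytes) -> str:
    # JWT uses unpadded base64url.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(claims: dict) -> str:
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_jwt(token: str) -> dict:
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    padded = payload + "=" * (-len(payload) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

# The auth server issues a token carrying the user's role; the REST API
# verifies the signature and reads the role to enforce access control.
token = make_jwt({"sub": "factory-worker-7", "role": "operator"})
assert verify_jwt(token)["role"] == "operator"
```

Because the claims are signed rather than encrypted, the token still has to travel over TLS; the signature only guarantees that the role claim wasn't altered in transit.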

Is using Redis a violation of REST principles?

I am creating a webapp for data analysis. I want to use Redis to store the data that the user has uploaded so that I can send it to other pages/views. This data is only valid during the session and should expire when the session expires.
Is this a violation of REST principles? Or is this only a problem if I use some value that I have stored server side as session key/identifier?
With your updates, what you can do is upload the data, generate a key for it, and place it in Redis, keeping it in a hash (with metadata) or a list (if there could be more than one upload). The list/hash key could be identified by the user id.
Then, moving forward, let the client refer to this object using the generated id.
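That pattern can be sketched as follows, with a plain dict standing in for Redis (the real calls would be redis-py's `hset`/`rpush`/`expire`; the key naming scheme here is an assumption):

```python
# Hedged sketch of the upload-key pattern. A dict simulates Redis storage;
# the "upload:<id>" and "user:<id>:uploads" key names are illustrative.
import uuid

store = {}

def save_upload(user_id: str, data: bytes, meta: dict) -> str:
    upload_id = uuid.uuid4().hex  # the generated id the client will refer to
    # Hash per upload: the raw data plus its metadata.
    store[f"upload:{upload_id}"] = {"data": data, **meta}
    # List per user, so more than one upload is allowed.
    store.setdefault(f"user:{user_id}:uploads", []).append(upload_id)
    return upload_id

def get_upload(upload_id: str) -> dict:
    return store[f"upload:{upload_id}"]

uid = save_upload("user-42", b"col1,col2\n1,2\n", {"filename": "data.csv"})
assert get_upload(uid)["filename"] == "data.csv"
```

With real Redis you would also call `EXPIRE` on both keys with the session's TTL, which gives you the "expires with the session" behavior for free.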
Actually, one of the best practices for using Redis over the internet is to expose a REST API and handle all communication through your web server. Redis is always kept in a secure network, since Redis doesn't provide any security itself.
From the Redis website:
Network security
Access to the Redis port should be denied to everybody but trusted clients in the network, so the servers running Redis should be directly accessible only by the computers implementing the application using Redis.
In the common case of a single computer directly exposed to the internet, such as a virtualized Linux instance (Linode, EC2, ...), the Redis port should be firewalled to prevent access from the outside. Clients will still be able to access Redis using the loopback interface.
This is also a basic practice when using traditional databases.

BITS, TakeOwnership, and Kerberos / Windows Integrated Authentication

We're using BITS to upload files from machines in our retail locations to our servers. BITS will stop transferring a file if the user who owns the BITS job logs off. Therefore, we're using a Windows Service running as LocalSystem to submit the jobs to BITS and be the job owner. This allows transfers to continue 24/7.
However, it raises a question about authentication. We want the BITS server extensions in IIS to use Kerberos to authenticate the client machine. As far as I can tell, that leaves us with only two options, neither of which is ideal: either we create an "ImageUploader" account and store its username/password in a config file that the Windows service uses as credentials for the BITS job, or we ask the logged-on user who creates the BITS job for his password, and then use his credentials for the BITS job. I guess the third option is not to use Kerberos, and maybe go with Basic Auth plus SSL.
I'm sure I'm wrong and there's a better option. Is there?
(By the way, here's a blurb from the BITS documentation about service accounts, impersonation, and BITS:)
Service Accounts and BITS: You can use BITS to transfer files from a service. The service must run as the LocalSystem, LocalService, or NetworkService system account. Jobs created by the system account are owned by that account. Because system accounts are always logged on, BITS transfers the files as long as the computer is running and there is a network connection. If a service running under a system account impersonates the user before calling BITS, BITS responds as it would for any user account (the user must be logged on). For more details on using a service with BITS, see the Platform SDK.
Thanks.