Is there the possibility to limit the access to Service Fabric Explorer to certain services or specific users?
We have a scenario where we host multiple services on the same cluster. The log information in the Explorer should be visible only to the 'owner' of each service.
No.
You can use access control to limit access to certain cluster operations for different groups of users. This helps make the cluster more secure. Two access control types are supported for clients that connect to a cluster: Administrator role and User role.
Users who are assigned the Administrator role have full access to management capabilities, including read and write capabilities. Users who are assigned the User role, by default, have only read access to management capabilities (for example, query capabilities). They also can resolve applications and services.
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-security#role-based-access-control-rbac
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-security-roles
You can assign different roles to groups, but you cannot scope a role to a single service, so it's basically all or nothing; you cannot grant granular, per-service control.
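For illustration only (this is an assumption about a certificate-secured cluster, not something stated in the answer above): the Administrator/User split is typically expressed by listing client certificates with an isAdmin flag on the cluster resource. A trimmed fragment of a Microsoft.ServiceFabric/clusters ARM template might look like this, with placeholder thumbprints:

"properties": {
    "clientCertificateThumbprints": [
        { "certificateThumbprint": "<admin-client-cert-thumbprint>", "isAdmin": true },
        { "certificateThumbprint": "<read-only-client-cert-thumbprint>", "isAdmin": false }
    ]
}

Either way, the role applies to the whole cluster, not to individual services.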
I'd like to limit the privileges afforded to any given user that I create via the Google Terraform provider. By default, any user created is placed in the cloudsqlsuperuser group, and any new database created has that role/group as owner. This gives any user created via the GCP console or google_sql_user Terraform resource total control over any database that is (or was) created in a similar fashion.
So far, the best we've been able to come up with is creating and altering a user via a single-run k8s job. This seems circuitous, at best, especially given that that resource must then be manually imported later if we want to manage it via Terraform.
Is there a better way to create a user that has privileges limited to a single, application-specific database?
I was puzzled by this behaviour too. It's probably not the answer you want, but if you can use GCP IAM accounts, the user gets created in the PostgreSQL instance with NO roles.
There are 3 types of account you can create with "gcloud sql users create" or the Terraform resource "google_sql_user":
"CLOUD_IAM_USER", "CLOUD_IAM_SERVICE_ACCOUNT" or "BUILT_IN"
The default is the BUILT_IN type if not specified.
CLOUD_IAM_USER and CLOUD_IAM_SERVICE_ACCOUNT users get created with NO roles.
We are using these because integration with IAM is useful in lots of ways (not managing passwords at the database level is a major plus, especially when used in conjunction with the SQL Auth Proxy).
BUILT_IN accounts (i.e. old-school accounts that need a Postgres username and password) are, for some reason, granted the "cloudsqlsuperuser" role.
In the absence of being allowed the actual superuser role on GCP, this is about as privileged as you can get, so to me (and you) it seems a bizarre default.
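If IAM database authentication is an option for you, a minimal Terraform sketch could look like the following (the instance name and email are placeholders, and the instance needs the cloudsql.iam_authentication database flag enabled):

# Sketch only: an IAM-based database user, created with no roles.
resource "google_sql_user" "iam_user" {
  name     = "alice@example.com"     # the IAM principal's email (placeholder)
  instance = "my-postgres-instance"  # placeholder instance name
  type     = "CLOUD_IAM_USER"        # no password, no cloudsqlsuperuser grant
}

# Database-level privileges are then granted inside PostgreSQL as usual, e.g.
#   GRANT CONNECT ON DATABASE app_db TO "alice@example.com";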
I have a project whose resources are spread across 3 resource groups. I want to create a service connection scoped to all of those resource groups so that I can manage access in one place through that service connection. Currently I have created 3 service connections, each scoped to one resource group. I don't want to scope it to the subscription, since other teams handle projects in that subscription, and that would give me maintenance and audit issues in the future.
If I create a service principal, assign it to the 3 resource groups, and then attach this service principal to a service connection, would that be a good design?
Is there any better way to achieve this?
When you create a new service connection in Azure DevOps, it creates an Azure AD app registration, and a new service principal is created for the resource group you choose.
So you can just go to any other resource group and add that principal using Access control (IAM). Select the Add role assignment option, select the Contributor role in the role grid, and press Next. On the next screen, select User, group, or service principal as the option for Assign access to, click + Select members, search for the registered app's display name, select it from the results, and click the Select button. Finally, click the Review + assign button.
I have written a detailed article to explain the steps; you can read it here.
You don't have to create the service principal manually. You can let the interface create the service principal, grant it permissions on the first resource group, and configure the connection automatically for you.
Then once it's done, look at the service connection to identify the service principal in use, and give it permissions on the other resource groups.
And yes, it is a good design. The only drawback compared to 3 service principals is that you have less granularity over who in Azure DevOps has access to each of these 3 resource groups via permissions on the service connection(s), since you only have one instead of 3.
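As a sketch with the Azure CLI (the display name, appId, subscription ID and resource group names below are placeholders):

# Find the appId of the service principal the service connection uses
# (the display name usually contains the DevOps organisation and project).
az ad sp list --display-name "MyOrg-MyProject" --query "[].appId" -o tsv

# Grant that service principal Contributor on the other resource groups.
az role assignment create \
    --assignee "<appId-from-the-previous-command>" \
    --role "Contributor" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/rg-two"

az role assignment create \
    --assignee "<appId-from-the-previous-command>" \
    --role "Contributor" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/rg-three"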
OPC-UA can restrict access to nodes via the userRole mechanism specified in OPC-UA Part 3, ch. 4.8.3:
When a Client attempts to access a Node, the Server goes through the list of Roles granted to the Session and logically ORs the Permissions for the Role on the Node.
A session should map user identity (or the certificate of the client application) to a role. Thus, the server can grant or deny access to nodes.
Standard mapping rules can be used to determine which Roles a Session has access to and, consequently, the Permissions that are granted to the Session.
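To make the quoted rule concrete, here is a small self-contained sketch (not OPC Foundation SDK code; all types below are invented for illustration) of how the Permissions of a Session's Roles are OR-ed together on a Node:

// Illustration only: effective permissions on a Node are the logical OR of the
// permissions granted to each Role held by the Session (Part 3, ch. 4.8.3).
using System;
using System.Collections.Generic;

[Flags]
public enum NodePermissions { None = 0, Browse = 1, Read = 2, Write = 4, Call = 8 }

public static class RolePermissionExample
{
    public static NodePermissions EffectivePermissions(
        IEnumerable<string> sessionRoles,
        IReadOnlyDictionary<string, NodePermissions> rolePermissionsOnNode)
    {
        var effective = NodePermissions.None;
        foreach (var role in sessionRoles)
        {
            if (rolePermissionsOnNode.TryGetValue(role, out var granted))
            {
                effective |= granted; // OR the permissions of every granted Role
            }
        }
        return effective; // access is denied if the required bit is not set
    }
}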
Now, the configuration schema of the server applications allows defining a UserRoleDirectory where I would expect files that somehow link a user (or client application certificate) to a role.
Unfortunately, the UserRoleDirectory appears to be undocumented.
So, the questions are:
Will the OPC Foundation's .NET sample server indeed use files in UserRoleDirectory for this purpose?
If so, what format do the files in that directory take?
If not, how can users or client application instances be tied to roles?
I need to retrieve the current count and capacity (limit) of the following for an AWS account:
users
groups
roles
instance profiles
server certificates per AWS account.
I have tried the following commands:
Get-EC2AccountAttributes,
the Describe methods of the Ec2Client class.
Thank you in advance
As outlined in Limitations on IAM Entities, the AWS Identity and Access Management (IAM) service limits can be retrieved by means of the GetAccountSummary API.
The respective AWS Tools for Windows PowerShell cmdlet is Get-IAMAccountSummary:
Retrieves account level information about account entity usage and IAM quotas. [...]
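For example, with the AWS Tools for PowerShell (the key names below are the documented GetAccountSummary summary keys; worth double-checking against the current API reference):

# Get-IAMAccountSummary returns the GetAccountSummary SummaryMap
# as a dictionary of counts and quotas.
$summary = Get-IAMAccountSummary

"Users:              $($summary['Users']) / $($summary['UsersQuota'])"
"Groups:             $($summary['Groups']) / $($summary['GroupsQuota'])"
"Roles:              $($summary['Roles']) / $($summary['RolesQuota'])"
"InstanceProfiles:   $($summary['InstanceProfiles']) / $($summary['InstanceProfilesQuota'])"
"ServerCertificates: $($summary['ServerCertificates']) / $($summary['ServerCertificatesQuota'])"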
Is there a way to grant access to a Google Cloud Storage bucket based on the IP address the request is coming from?
On Amazon S3, you can just set this in the access policy like this:
"Condition" : {
"IpAddress" : {
"aws:SourceIp" : ["192.168.176.0/24","192.168.143.0/24"]
}
}
I do not want to use a signed URL.
The updated answers on this page are only partially correct and should not be recommended for the use case of access control to Cloud Storage Objects.
Access Context Manager (ACM) defines rules to allow access (e.g. an IP address).
VPC Service Controls create an "island" around a project, and ACM rules can be attached to it. These rules are "ingress" rules, not "egress" rules, meaning "anyone at that IP can get into all resources in the project with the correct IAM permissions".
The ACM rule specifying an IP address will allow that IP address to access all Cloud Storage Objects and all other protected resources owned by that project. This is usually not the intended result. You cannot apply an IP address rule to an object, only to all objects in a project. VPC Service Controls are designed to prevent data from getting out of a project and are NOT designed to allow untrusted anonymous users access to a project's resources.
UPDATE: This is now possible using VPC Service Controls
No, this is not currently possible.
There's currently a feature request to restrict access to a Google Cloud Storage bucket by IP address.
The VPC Service Controls [1] allow users to define a security perimeter around Google Cloud Platform resources such as Cloud Storage buckets (and some others) to constrain data within a VPC and help mitigate data exfiltration risks.
[1] https://cloud.google.com/vpc-service-controls/
I used VPC Service Controls on behalf of a client recently to attempt to accomplish this. You cannot use VPC Service Controls to whitelist an IP address on a single bucket. Jterrace is right; there is no such solution for that today.
However, using VPC Service Controls you can draw a service perimeter around the Google Cloud Storage (GCS) service as a whole within a given project, then apply an access level to that perimeter to allow an IP address or IP range access to the service (and the resources within it). The implication is that any new buckets created within the project will be created inside the service perimeter and thus be regulated by the access levels applied to it, so you'll likely want this to be the sole bucket in the project.
Note that the service perimeter only affects services you specify. It does not protect the project as a whole.
Developer Permissions:
Access Context Manager
VPC Service Controls
Steps to accomplish this:
Use VPC Service Controls to create a service perimeter around the entire Google Cloud Storage service in the project of your choosing
Use Access Context Manager to create access levels for the IP addresses you want to whitelist and the users/groups who will have access to the service
Apply these access levels to the service perimeter created in the previous step (it will take 30 minutes for this change to take effect)
Note: Best practice would be to provide access to the bucket using a service account or users/groups ACL, if that is possible. I know it isn't always so.
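As a rough sketch of those steps with the gcloud CLI (the policy ID, project number, level name and CIDR range are placeholders; check the current gcloud reference for the exact flags):

# 1. Create an access level matching the allowed IP range
#    (conditions.yaml contains, for example: "- ipSubnetworks: [203.0.113.0/24]").
gcloud access-context-manager levels create allow_office_ip \
    --title="Allow office IP" \
    --basic-level-spec=conditions.yaml \
    --policy=POLICY_ID

# 2. Create a service perimeter around the Cloud Storage service in the project
#    and attach the access level to it.
gcloud access-context-manager perimeters create storage_perimeter \
    --title="Storage perimeter" \
    --resources=projects/PROJECT_NUMBER \
    --restricted-services=storage.googleapis.com \
    --access-levels=allow_office_ip \
    --policy=POLICY_ID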