Is it possible to have multiple Kerberos tickets on the same machine? - kerberos

I have a use case where I need to connect to 2 different DBs using 2 different accounts, and I am using Kerberos for authentication.
Is it possible to create multiple Kerberos tickets on the same machine?
kinit account1@DOMAIN.COM (first ticket)
kinit account2@DOMAIN.COM (second ticket)
Whenever I do klist, I only see the most recent ticket created. It doesn't show all the tickets.
Next, I have a job that needs to first use the ticket for account1 (for the connection to DB1) and then use the ticket for account2 (for DB2).
Is that possible? How do I tell the DB connection which ticket to use?

I'm assuming MIT Kerberos and linking to those docs.
Try klist -A to show all tickets in the ticket cache. If there is only one, try switching your ccache type to DIR as described here:
DIR points to the storage location of the collection of the credential caches in FILE: format. It is most useful when dealing with multiple Kerberos realms and KDCs. For release 1.10 the directory must already exist. In post-1.10 releases the requirement is for the parent directory to exist, and the current process must have permission to create the directory if it does not exist. See Collections of caches for details. New in release 1.10. The following residual forms are supported:
DIR:dirname
DIR::dirpath/filename - a single cache within the directory
Switching to a ccache of the latter type causes it to become the primary for the directory.
You do this by specifying the default ccache name as DIR:/path/to/cache in one of the ways described here (a short example follows the list below).
The default credential cache name is determined by the following, in descending order of priority:
The KRB5CCNAME environment variable. For example, KRB5CCNAME=DIR:/mydir/.
The default_ccache_name profile variable in [libdefaults].
The hardcoded default, DEFCCNAME.
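For example, a minimal shell sketch, assuming MIT Kerberos 1.10+ with a DIR ccache (the cache path and the two job commands are hypothetical):
# use a directory collection so each kinit gets its own cache
export KRB5CCNAME=DIR:/tmp/krb5cc_collection
kinit account1@DOMAIN.COM
kinit account2@DOMAIN.COM
klist -A    # now lists both tickets
# make account1's cache the primary one before connecting to DB1
kswitch -p account1@DOMAIN.COM
run_db1_job    # hypothetical job connecting to DB1
# then switch to account2 for DB2
kswitch -p account2@DOMAIN.COM
run_db2_job    # hypothetical job connecting to DB2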

Related

Hyperledger Fabric CA - Storing the identity materials the correct way

Currently I have a VM running and have installed the binaries needed for fabric-ca. I have a docker-compose file looking like this:
I have some questions regarding this:
The docker-compose file will create one container; if I want it for more organizations, do I need to copy/paste this and change the port number? (I don't want to use intermediate CAs.)
When registering/enrolling an identity, it will override the default materials, because it will always put the materials for the new identity in /etc/hyperledger/fabric-ca-client. So when creating multiple identities (orderer, peers, users, etc.), how do I need to organize them? What's the best practice?
In the image you can see that the server and clients are specified; is this a good approach? Or should the client and the server be in different containers?
More than one CA in a Docker Compose file - you can look at the Build your first network tutorial in the Fabric docs, which has a 2-org network and various configuration files, including Docker Compose.
Combined client/server container - this might be convenient for testing, but in a production scenario it is definitely not advisable, for security and operational-integrity reasons.
Overwriting identities - the enroll command writes a tree of data to the location specified by the environment variable FABRIC_CA_CLIENT_HOME, but you can use --home to redirect the tree to a different location:
fabric-ca-client enroll -u http://Jane:janepw@myca.example.com:7054 --home /home/test/Jane/
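For example, a minimal sketch keeping one enrollment tree per identity (the identity names, secrets, URL, and paths here are hypothetical):
# enroll the registrar (admin) into its own tree
export FABRIC_CA_CLIENT_HOME=/etc/hyperledger/clients/admin
fabric-ca-client enroll -u http://admin:adminpw@myca.example.com:7054
# register a peer identity, then enroll it into a separate tree
fabric-ca-client register --id.name peer1 --id.secret peer1pw --id.type peer
fabric-ca-client enroll -u http://peer1:peer1pw@myca.example.com:7054 --home /etc/hyperledger/clients/peer1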

Checkout issue from Windows ClearCase client

A user cannot check out from a Windows ClearCase client (see picture).
And yet, the same user can check out from a Unix client.
Why?
Thanks for your answer @VonC.
Please find my findings below.
Here is the primary group of the VOB:
/usr/atria/bin/cleartool desc vob:/vobs/MCT
versioned object base "/vobs/MCT"
created 2010-03-03T16:42:52+02:00 by Admin.WTD (wtadmin.wtusers@frmrssucc004)
"MSS Access"
master replica: xh_mct_athens@/vobs/MCT
replica name: xh_mct_athens
VOB family feature level: 5
modification by remote privileged user: allowed
atomic checkin: disabled
VOB ownership:
owner *********servername***/ca_xhvadm
group eelinnis.emea.nsn-net.net/ccusers_xhaul_athens
ACLs enabled: No
Attributes:
FeatureLevel = 5
Hyperlinks:
AdminVOB -> vob:/vobs/MPTADMIN
And the user id output is:
id karageor
uid=61333334(karageor) gid=8003(ccusers_xhaul_athens)
groups=7000(hostingusers_cic_athens),8003(ccusers_xhaul_athens)
and on Windows the primary group is set as
Does the unix group ccusers_xhaul_athens have to be set as the Windows primary group via the system variable?
Kindly confirm.
The main factor that explains a permission issue in a ClearCase interop setup (Windows ClearCase client, Linux ClearCase server) is the CLEARCASE_PRIMARY_GROUP environment variable.
That variable (CLEARCASE_PRIMARY_GROUP) needs to:
be set to the primary group of the VOB of the element the user is trying to check out (primary or secondary: type cleartool describe -l vob:\YourVob to list them);
be the same value as the primary group of the Linux user (who can successfully check out the same element in his/her Linux ClearCase view): type id -a to see that primary group, eelinnis.emea.nsn-net.net/ccusers_xhaul_athens.
Make sure that on Windows the user is not launching the client with another account (Administrator, or the System account), and that CLEARCASE_PRIMARY_GROUP is set
(and that the number of groups is not too high).
You can use the creds utility to see your credentials.
See more at "ClearCase won't allow Check-In" and use the credmap utility to verify the group assignments between Windows and Unix.
Type set CL in a CMD shell on Windows to see the actual full value of the CLEARCASE_PRIMARY_GROUP environment variable (??_EE_CLEARCASE_USERS_XHAUL_ATHENS).
You need to see if that group maps to the Linux one.
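For example, a minimal CMD sketch (the group name is the one from this question; the server name is hypothetical):
:: set the variable to the VOB's Linux primary group (open a new shell afterwards)
setx CLEARCASE_PRIMARY_GROUP "ccusers_xhaul_athens"
:: verify the value and the resulting ClearCase credentials
set CL
creds
:: check how the Windows identity maps to the Unix one on the server
credmap your-linux-server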
Also check the protection associated with your view. See fix_prot on Windows here.

How to read multiple config files from Spring Cloud Config Server

Spring Cloud Config Server supports reading property files named ${spring.application.name}.properties. However, I have two properties files in my application.
a.properties
b.properties
Can I get the config server to read both these properties files?
Rename your properties files in the Git repository or file system that your config server is pointed at:
a.properties -> <your_application_name>.properties
b.properties -> <your_application_name>-<profile-name>.properties
For example, if your application name is test and you are running your application with the dev profile, the two properties files below will be used together:
test.properties
test-dev.properties
You can also specify additional profiles in the bootstrap file (bootstrap.yml, given the YAML shown) of your config client to retrieve more properties files, like below:
spring:
  profiles: dev
  cloud:
    config:
      uri: http://yourconfigserver.com:8888
      profile: dev,dev-db,dev-mq
If you specify it like the above, all of the files below will be used together:
test.properties
test-dev.properties
test-dev-db.properties
test-dev-mq.properties
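If you keep a bootstrap.properties instead of a bootstrap.yml, an equivalent sketch (with the same hypothetical server URI) would be:
spring.profiles.active=dev
spring.cloud.config.uri=http://yourconfigserver.com:8888
spring.cloud.config.profile=dev,dev-db,dev-mq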
Note that the provided answer assumes your property files address different execution profiles. If they don't, i.e., your properties are split into different files for some other reason (e.g., maintenance purposes, division by business/functional domain, or any other reason that suits your needs), then by defining a profile for each such file you are just "abusing" the profile feature to achieve your goal (multiple property files per app).
You could then ask, "OK, so what is the problem with that?" The problem is that you deprive yourself of various possibilities that you would otherwise have. If you actually want to customize your application configuration by profile, you will have to create pseudo sub-profiles for that, since the file name is already a profile. Example:
Your application configuration could be customized by different profiles, which you use inside your Spring Boot application (e.g., in the @Profile() annotation); let them be dev, uat, and prod. You can boot your application setting different profiles as active, e.g., dev vs. uat, and get the group of properties that you desire. For your a.properties, b.properties, and c.properties files, if different file names were supported, you would have a-dev.properties, b-dev.properties, and c-dev.properties vs. a-uat.properties, b-uat.properties, and c-uat.properties for the dev and uat profiles.
Nevertheless, with the provided solution, you have already defined three profiles, one per file (appname-a.properties, appname-b.properties, and appname-c.properties): a, b, and c. Now imagine you have to create a different profile for each... profile (it already shows something goes wrong here)! You would end up with a lot of profile permutations (which would get worse as files increase): the files would be appname-a-dev.properties, appname-b-dev.properties, appname-c-dev.properties vs. appname-a-uat.properties, appname-b-uat.properties, appname-c-uat.properties, but the profiles would have increased from ['dev', 'uat'] to ['a-dev', 'b-dev', 'c-dev', 'a-uat', 'b-uat', 'c-uat']!
Even worse, how are you going to cope with all these profiles inside your code, and more specifically your @Profile() annotations? Will you clutter the code with "artificial" profiles just because you want to add one or two more property files? It should have been sufficient to define your dev or uat profiles, where applicable, and define somewhere else the applicable property file names (which could then be further refined by profile, without any other configuration action), just as happens in the externalized properties configuration for individual Spring Boot apps.
For completeness, I will just add that if you want to switch to .yml property files one day, with the provided profile-based naming solution you also lose the ability to define different "YAML document sections per profile" inside the same .yml file. (Yes, in .yml you can have one property file yet define multiple logical YAML documents inside it, which is usually done to customize the properties for different profiles while keeping all related properties in one place.) You lose that ability because you have already used the profile in the file name (appname-profile.yml).
I have issued a pull request with a minor fix for spring-cloud-config-server 1.4.x, which allows defining additionally supported file names (apart from "application[-profile]" and "{appname}[-profile]", which are currently supported) by providing a spring.cloud.config.server.searchNames environment property, analogous to spring.config.name for Spring Boot apps. I hope it gets reviewed and accepted.
I came across the same requirement lately, with the additional constraint that I am not allowed to play with the environment profiles, so I couldn't do what the accepted answer suggests. I'm sharing how I did it as an alternative for those who might have the same case as me.
In my application, I have properties such as:
appxyz-data-sources.properties
appxyz-data-sources-staging.properties
appxyz-data-sources-production.properties
appxyz-interfaces.properties
appxyz-interfaces-staging.properties
appxyz-interfaces-production.properties
appxyz-feature.properties
appxyz-feature-staging.properties
appxyz-feature-production.properties
application.properties // for my use, contains local properties only
bootstrap.properties // for my use, contains management properties only
In my application, I have these particular properties set that allow me to achieve what I needed. Note that I have the rest of the needed config as well (enabling cloud config, actuator refresh, Eureka service discovery, and so on); I'm just highlighting these for emphasis:
spring.application.name=appxyz
spring.cloud.config.name=appxyz-data-sources,appxyz-interfaces,appxyz-feature
You can observe that I didn't want to play with my application name; instead I used it as a prefix for my config property files.
In my configuration server's application.yml I configured a repo entry to capture the pattern 'appxyz-*':
spring:
  cloud:
    config:
      server:
        git:
          uri: <git repo default>
          repos:
            appxyz:
              pattern: 'appxyz-*'
              uri: <another git repo if you have 1 repo per app>
              private-key: ${git.appxyz.pk}
              strict-host-key-checking: false
              ignore-local-ssh-settings: true
          private-key: ${git.default.pk}
In my Git repository I have the following. There is no application.properties or bootstrap.properties, because I didn't want those to be published and overridden/refreshed externally, but you can include them if you want.
appxyz-data-sources.properties
appxyz-data-sources-staging.properties
appxyz-data-sources-production.properties
appxyz-interfaces.properties
appxyz-interfaces-staging.properties
appxyz-interfaces-production.properties
appxyz-feature.properties
appxyz-feature-staging.properties
appxyz-feature-production.properties
It is the pattern 'appxyz-*' that will capture and return the matching files from my Git repository. The profile will also apply and fetch the correct property file accordingly. The prioritization of values is also preserved.
Furthermore, if you wish to add another file to your application (say appxyz-circuit-breaker.properties), you only need to:
Add the name to spring.cloud.config.name=...,appxyz-circuit-breaker
Then add copies of the file locally and also externally (in the Git repo).
There is no need to add/modify anything more or to restart your configuration server later on. For a new application, it's like a one-time registration: add an entry under the repos section of application.yml.
Hope it helps in one way or another!
In your application's bootstrap.properties, you have to specify it like below:
spring.application.name=a,b
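A minimal bootstrap.properties sketch for this approach (the config server URI is the hypothetical one used earlier):
spring.application.name=a,b
spring.cloud.config.uri=http://yourconfigserver.com:8888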

Should files involved in SSL certificate be kept confidential (added to .gitignore)?

In the process of setting up an SSL certificate for my site, several different files were created:
server.csr
server.key
server.pass.key
site_name.crt
Should these be added to .gitignore before pushing my site to github?
Apologies in advance if this is a dumb question.
Should these be added to .gitignore before pushing my site to github?
They should not be in the repo at all, meaning stored outside of the repo.
That way:
you don't need to manage a .gitignore,
you can store those keys somewhere safe.
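If the files have to stay in the working tree during development, a minimal .gitignore sketch (using the file names from the question) at least keeps the secrets out of commits:
server.key
server.pass.key
server.csr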
GitHub actually had to change its search feature back in 2013 after seeing users store keys and passwords in public repositories. See the full story.
The article includes this quote:
The mistakes may reflect the overall education problem among software developers.
When you have expedited programs—"6 weeks and you'll be a real software developer"—to teach developing, security becomes an afterthought. And considering "90 percent of development is copying stuff you don't understand, I'd bet most of them simply don't know what id_rsa is"
In 2016, this "book" (as a joke) reflects that:
The OP adds:
I think Heroku requires putting the files into the repo in order to run heroku certs:add server.crt server.key and set up the cert.
"Configuration and Config Vars" is one illustration on that topic:
A better solution is to use environment variables, and keep the keys out of the code. On a traditional host or working locally you can set environment vars in your bashrc file. On Heroku, you use config vars.
The article "Heroku: SSL Endpoint" does not force you to have those key and certificate in your code. They can be generated directly on Heroku and saved anywhere else for safekeeping. Just not in a git repo.
I would like to add to @VonC's answer, as it is in fact more complicated:
The files have different content, and depending on that they require a different access control:
server.csr: This is a certificate signing request file. It is generated from the key (server.key in your case) and used to create the certificate (site_name.crt in your case). This should be deleted when the certificate has been created. It should not be shared with untrusted parties.
server.key: This is the private key. Under no circumstances can this file be shared outside of the server, and it must not end up in a code repository. On the system it must be stored with 0600 permissions (i.e., readable and writable only by the owner), owned by either root or the web server user; see the sketch after this list. That is for Linux at least; on Windows user access rights are handled differently, but the effect has to be similar.
site_name.crt: This is the signed certificate. It is considered public; the certificate is essentially the public key, and it is sent to everyone who connects to the server. It can be stored in the repository. (The hint from @VonC is correct: code and data should be separated, but the certificate can be kept, e.g., in a separate repository for the deployment.)
server.pass.key: I don't know what this is, but it seems to contain the password used to unlock the key. If that is the case, the same rules as for the key apply: never share it with anyone.
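A minimal sketch of locking down server.key on Linux, as mentioned above (the path is hypothetical):
chown root:root /etc/ssl/private/server.key
chmod 600 /etc/ssl/private/server.key   # owner read/write only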

Oracle ZFS Storage appliance: How to configure SMB share level ACLs via REST API?

I am developing a script which uses the REST API for an Oracle ZFS Storage appliance ("ZS3"). The script uses the API to make a snapshot and clone of a production environment for use as a temporary test environment. So far everything is great... except I can find no way to specify the "Share Level ACL" settings for the SMB protocol.
A manual (via web ui) clone results in a default ACL of "everyone, full access". The ACL for the original share (source for the snapshot/clone) has a specific user list with specific ACLs. I assume that this information is not in the ZFS snapshot, but maintained outside of ZFS, hence it is not present in the clone (Q: Is this correct?).
I've re-read the Oracle document "E56084.pdf" ("Oracle ZFS Storage Appliance RESTful API Guide, Release 2013.1.4.0") a few times. There are vague references to the "sharesmb" property, and nothing else related to SMB or ACLs. My script correctly sets the "sharesmb" value (used to enable SMB sharing) to "sharesmb=SHARENAME,abe=off,dfsroot=false" in the JSON payload passed to the API for creating a file system clone. However, I see no property that I can set for the actual ACL list. For NFS this is easy: it is the value passed in the "sharenfs" property.
The result of a "GET" of the source project and share do not contain any reference to the users listed in the "SMB Share Level ACL" as seen in the web UI.
So, how do I copy over, or explicitly set if necessary, the "SMB Share Level ACLs" on a share via the REST api?
Thanks!
The system has two different kinds of ACLs and both are stored inside your datasets:
ACLs on all files and directories (let's call them file ACLs): These are used for general Unix access and also are active when sharing the filesystem. They are stored with each file or directory (use /usr/bin/ls -V /pool/filesystem/yourFile or /usr/bin/ls -Vd /pool/filesystem/yourDir to see them).
ACLs on filesystems shared via SMB/CIFS protocol (let's call them share ACLs): These are only used when sharing the filesystem and can only be set for the whole filesystem, not individual files inside. Use /usr/bin/ls -V /pool/filesystem/.zfs/shares/yourShareName to see them.
Unfortunately I do not know how to do that over the REST API, but at least you know where your ACLs should end up.
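For reference, assuming shell access to the host serving the dataset, the share ACL can be inspected and, using the Solaris chmod ACL syntax, replaced like this (the user and permissions are hypothetical):
# inspect the current share-level ACL
/usr/bin/ls -V /pool/filesystem/.zfs/shares/yourShareName
# replace it with an explicit entry for one user
/usr/bin/chmod A=user:jdoe:full_set:allow /pool/filesystem/.zfs/shares/yourShareName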