Does anyone know of a good (i.e. efficient) way of getting notifications of filesystem mounts/unmounts and/or new/removed devices in Solaris (10), without requiring root?
I'm trying to avoid simply polling /etc/mnttab for new/removed/changed entries, so my first prototype involved using sysevent_subscribe_event to listen for device events, and then using the device information in conjunction with /etc/mnttab to get the mount point. That works well, but sysevent_subscribe_event requires root privileges to run, and I'm not going to have access to the end-user's box, so I can't really elevate their privileges.
I imagine this could be quite tricky, given the restriction of running without root, but any help gratefully received!
Solaris 10 has role-based access control (RBAC), so if you have root access to the box you can grant your user the authority to use the service. See man roles and man auths to get started; here are two pages that also look useful (a rough command sketch follows the links):
http://www.softpanorama.org/Solaris/Security/solaris_rbac.shtml
http://www.sun.com/software/whitepapers/wp-rbac/wp-rbac.pdf
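To sketch what that could look like in practice (hedged heavily: the exact privilege the sysevent interfaces need is an assumption you would have to verify, and whether they check a privilege at all rather than insisting on uid 0 is worth confirming first):

    # Run the prototype under privilege debugging to see which privilege is missing:
    ppriv -e -D ./mount_monitor_prototype        # binary name is just a placeholder
    # Suppose ppriv reports sys_config (an assumption); grant only that privilege:
    usermod -K defaultpriv=basic,sys_config enduser
    # The user must log in again for the new default privilege set to take effect.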
marc
I have a client whose site was built by someone else and put on Compute Engine. I am totally lost and have no clue about this. I do see a bucket, but the only thing in it is a footer.php. The site is a WordPress multisite, and I cannot find where the files are stored or how to access phpMyAdmin to see the database.
I ask because the site is having many issues: the SSL certificate has expired, PHP is out of date, and now I cannot log in or see the site because it gives a 500 error or the white screen of death.
I tried to find what caused the error, but got nowhere.
Site is http://nextstudy.org
Can anyone help or direct me on what I can do to get to the files and maybe move the site off Compute Engine?
Appreciate you reading this.
Diana
GCE does not serve files from a bucket; it runs VM instances off disk images.
Unless you are assigned an admin role in Cloud IAM, there's probably not much you can do. And even with an admin role granted, it's still rather risky when you have no idea how things are set up. If it's only a single instance, Cloud Shell might help; but if it's an instance group, the deployment may work completely differently (up to the point where the servers are spun up from nothing but a shell script, which makes editing individual instances fairly meaningless).
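If you do end up with a suitable IAM role on the project, a hedged starting point from Cloud Shell could be the following (instance name, zone and document root are placeholders):

    gcloud compute instances list                     # find the VM that actually serves the site
    gcloud compute ssh INSTANCE_NAME --zone=ZONE      # SSH into it from Cloud Shell
    # Once on the VM, the WordPress files usually live under the web server's
    # document root, e.g. /var/www/html, but the exact path depends on the deployment.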
Maybe an easy question for someone who knows PowerShell and O365 well. Is there a way to configure it so that when a command is run, for example to pull all access to a shared mailbox, either a service account or the user running the script is permissioned each time, just to pull that information? I looked at connecting a service account to the script, but giving it the specific permissions permanently would leave it with too much standing access to O365. In other words, the account would have no access by default; every time the script/command is run it would be permissioned just for that inquiry, the results would be shown, and then it would have no access again until the next time it is called.
Looking to add this type of function to a script so that the helpdesk people only see the information when they run the script and the specific command in it.
Hopefully I've explained that clearly enough :)
Thanks all.
I don't think there is a way to do that natively. You could fiddle something together with Azure PIM, but that's more for one-off operations than for small actions that are done often.
You could however circumvent that by making some sort of web interface that triggers commands on another server using a privileged SA and returns the output through the web interface. You can just make it so that the interface can only request one specific command to be run, and the only thing you have to worry about is sanitizing your parameters well to avoid unwanted injection.
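As a very rough illustration of that idea (a sketch only, not the way it has to be built): a single-purpose endpoint that only ever runs one fixed Exchange Online query, with the mailbox parameter validated before it is passed along. It assumes Flask, pwsh and the ExchangeOnlineManagement module are available on the privileged host, and that authentication (Connect-ExchangeOnline) is handled separately; all names are illustrative.

    # Minimal sketch: one whitelisted query, a validated parameter, nothing else exposed.
    import re
    import subprocess
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # Accept only plain SMTP-address-looking identities, to avoid command injection.
    MAILBOX_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

    @app.route("/mailbox-permissions")
    def mailbox_permissions():
        mailbox = request.args.get("mailbox", "")
        if not MAILBOX_RE.fullmatch(mailbox):
            return jsonify(error="invalid mailbox identity"), 400
        # Fixed command, validated parameter: the interface can only run this one query.
        result = subprocess.run(
            ["pwsh", "-NoProfile", "-Command",
             f"Get-MailboxPermission -Identity '{mailbox}' | ConvertTo-Json"],
            capture_output=True, text=True, timeout=60)
        if result.returncode != 0:
            return jsonify(error=result.stderr.strip()), 502
        return app.response_class(result.stdout, mimetype="application/json")

    if __name__ == "__main__":
        app.run(host="127.0.0.1", port=8080)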
Alternatively, what are you trying to protect against by restricting access so much? Isn't it something that could be done more easily using a read-only account and some clearly defined policy? If your helpdesk people overstep their allowed scope, that's a management/HR problem as much as a technical one.
I am having a big problem that is quite difficult to search for.
I have an Ubuntu server on which I have installed:
GitLab (holds all our projects)
PostgreSQL (independent of GitLab's database; used for a personal project)
Tomcat with a web app (Spring Boot; this uses PostgreSQL)
This server is still for testing; it is used only for specific things (I mean, its use and access is limited and controlled).
I am having several problems:
Very frequently, almost every day, the postgres user of the PostgreSQL server has its password "erased". Nobody does it manually; it just keeps happening on its own. I notice it because the application stops responding, and when I then connect to PostgreSQL I find that the postgres user has no password.
I have looked in many places and can't find anything; I really don't know where else to look. If this has happened to you, or you have any information about it, I would be grateful if you could share it.
------More information added----------
I was looking at the PostgreSQL logs from just before the authentication stopped working, and I see this.
These are times when no one could have been using the Spring Boot server:
--2020-01-17 00:30:21.286
And there are also the two log entries that appear just before that moment. Could it be something that is deleting my password?
Thank you.
PostgreSQL does not randomly delete its own passwords, and I really doubt Tomcat or GitLab do either. Indeed, they shouldn't even have access to the server as the 'postgres' user or any other superuser, so they couldn't do it even if they wanted to.
It seems likely that there is an intruder in your system. After gaining access, intruders typically create their own user with their own password; disabling your normal superuser from logging in is then a common way to try to prevent you from regaining control and kicking them out. Do any users exist that you do not recognize?
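A quick way to check, from psql with whatever superuser access you still have:

    -- List every role; anything you did not create yourself is suspect.
    SELECT rolname, rolsuper, rolcanlogin, rolvaliduntil
    FROM pg_roles
    ORDER BY rolname;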
The bit of the log file you posted clearly shows someone trying to guess your password, starting at 2:58. You aren't logging IP addresses (%h) so it doesn't show where they are coming from. It doesn't show that they succeed, but unless you have log_connections = on, it wouldn't show successes.
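A minimal postgresql.conf sketch of the logging mentioned above (takes effect after a reload for new connections; %h adds the client host to every log line):

    log_connections = on
    log_line_prefix = '%m [%p] %h %u@%d '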
If Db2 uses OS authentication and I delete a Db2 user at the OS level, what will the impact be? Will Db2 still work fine, and will the privileges that I granted to the user still be available after the user is created again?
When asking for help with Db2, please mention your Db2-server platform (z/OS, IBM i, Linux/Unix/Windows). The reason is that the answer can be different per platform. There are also special tags for your question that you can use to indicate the Db2 platform (db2-zos, db2-400, db2-luw).
If you remove the operating-system user, the impact is that the user can no longer connect to the Db2 database(s). But any GRANTs that were previously created and stored inside the database(s) will remain unchanged (unless something REVOKEs them), even though they will not be used once all pre-existing connections by that removed operating-system user are terminated.
For Db2 on Linux/Unix/Windows, if you recreate the user in the operating system, the previous GRANTs will apply again, provided they are still present inside the database and the user successfully reconnects. This behaviour may be different on other platforms.
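For example, on Db2 for Linux/Unix/Windows you could confirm that the grants are still sitting in the catalog, independent of the operating-system user, with something like the query below ('APPUSER' is a hypothetical authorization ID):

    -- Table-level grants recorded inside the database, regardless of the OS user's state:
    SELECT GRANTEE, GRANTEETYPE, TABSCHEMA, TABNAME,
           SELECTAUTH, INSERTAUTH, UPDATEAUTH, DELETEAUTH
    FROM SYSCAT.TABAUTH
    WHERE GRANTEE = 'APPUSER';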
If the Db2 server is configured with special security plugins, or uses LDAP or other external tooling, then the answer can also be different.
We are using Perforce at my company and rely on it heavily. I need some suggestions for the following scenario:
Our Depot structure is something like this:
//depot
    /product1
        /component1
        /component2
        .
        .
        /componentN
            /*.java
            /*.xml
    /product2
        /component1
        /component2
        .
        .
        /componentN
            /*.java
            /*.xml
Every product has multiple components, and every component consists of Java, XML, or other program files. Every component has a manager/owner associated with it.
Right now, we have blocked write permissions for every user, and only when the manager/owner approves after a code review do we open write permission for that user on the relevant files/folders so they can check in. This process gets a little untidy because the manager/developer has to wait for a Perforce admin to grant the permissions (by updating the Perforce protections table). Also, we give them a window of only 24 hours to check in (due to agile, which I don't understand much :)), after which we are supposed to block write access for that user again.
What I am looking for is a mechanism that lets Perforce admins delegate this responsibility to the respective managers/owners without giving them superuser or admin access, and which automatically disables the write permission after 24 hours.
Any suggestions?
Thanks in advance.
There's nothing that does this out of the box, per se.
The closest thing I can think of is to have the mainline version of these components permissioned via a group with an owner. The owner of the group is allowed to add and remove members from the group, thus delegating the permissioning to the "gatekeeper" rather than to the admins themselves.
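A hedged sketch of what that might look like (group and user names are made up): the Perforce admin creates the group and the protections entry once, and from then on the component owner maintains the membership with p4 group, with no admin access required.

    # Spec excerpt from: p4 group product1-comp1-writers
    Group:  product1-comp1-writers
    Owners:
            component1_manager
    Users:
            dev_alice
            dev_bob

    # Matching line added once by the Perforce admin to the protections table:
    write group product1-comp1-writers * //depot/product1/component1/...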
Let me know if you require further clarification about this.
One common solution is to build a simple tool which reads and writes the protections table, the group memberships, etc., to implement the policies that you desire.
The protections and groups data are not complex in format, and you can easily write a little bit of text-processing code that writes and re-writes these specs according to your needs.
Then install your tool on the server machine in a secure fashion, granting the tool the rights to update the protections table, and have your component administrators use the tool to manage the permissions.
For example, I've seen this done by writing a small web application, in Java or Perl for example, installing that on a web server on a secure machine, and letting the component admins operate that tool through a web interface.
All your tool has to provide is (a) a simple login/logout mechanism for your component admins (the web server may already do this for you), (b) a command that takes a user name and a folder name and grants permission, and (c) a command (or a timer) that removes that permission again afterwards.
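As a rough sketch of (b) and (c), assuming the tool runs as a Perforce user allowed to edit the protections table and that the p4 command-line client is on the PATH (names are illustrative, and a real tool would want locking and auditing around this):

    import subprocess

    def _run_p4(args, stdin=None):
        """Run a p4 command, optionally feeding it a spec on stdin."""
        return subprocess.run(["p4"] + list(args), input=stdin,
                              capture_output=True, text=True, check=True)

    def grant_write(user, depot_path):
        """Append a write line for user on depot_path to the protections table."""
        spec = _run_p4(["protect", "-o"]).stdout
        line = f"\twrite user {user} * {depot_path}\n"
        if line not in spec:
            # The Protections: field is the last field in the spec, so appending works.
            _run_p4(["protect", "-i"], stdin=spec + line)

    def revoke_write(user, depot_path):
        """Remove the line added by grant_write (call this from a 24-hour timer or cron job)."""
        spec = _run_p4(["protect", "-o"]).stdout
        line = f"\twrite user {user} * {depot_path}\n"
        if line in spec:
            _run_p4(["protect", "-i"], stdin=spec.replace(line, ""))

    # Example: grant_write("alice", "//depot/product1/component1/...")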