Google Compute Engine and no files in bucket - google-cloud-storage

I have a client, and whoever designed their site put it on Compute Engine. I am totally lost; I have no clue about this. I do see a bucket, but there is only a footer.php in it. The site is a multisite WordPress installation, and I cannot find where the files are stored or how to access phpMyAdmin to see the database.
I ask because the site is having many issues: the SSL certificate has expired, PHP is out of date, and now I cannot log in or even view the site because it returns a 500 error or the white screen of death.
I tried to find what caused the error, but found nothing.
Site is http://nextstudy.org
Can anyone help or direct me on what I can do to get to the files and maybe get it off of compute engine?
Appreciate you reading this............
Diana

GCE does not serve a site's files from a bucket; it runs VM instances off disk images, so the WordPress files live on the instance's disk.
Unless you have been assigned an admin role in Cloud IAM, there's probably not much you can do. And even with an admin role granted, it's still rather risky when you have no clue about the setup. If it's only a single instance, Cloud Shell might help, but if it's an instance group, the deployment may work completely differently (up to the point where the servers are spun up from nothing but a shell script, which makes editing individual instances quite meaningless).
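If you do get a suitable role (roles/compute.instanceAdmin.v1, for example), a rough sketch of locating the files from Cloud Shell could look like the following - the project ID, zone and instance name are placeholders for whatever the client's project actually uses:

gcloud compute instances list --project my-project-id          # find the VM serving the site
gcloud compute ssh wordpress-vm --zone us-central1-a --project my-project-id
# once on the VM, the WordPress docroot and wp-config.php (which holds the database
# credentials you would otherwise look up in phpMyAdmin) are on the instance's disk:
sudo find / -name wp-config.php 2>/dev/null

From there you could copy the files and dump the database before attempting to move the site off Compute Engine.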

Related

Import a custom Linux image for POWER-IAAS part of the IBM Cloud?

I am trying to import a cloud-enabled Debian Linux image for the Power architecture to run on the IBM public cloud, which supports this architecture.
I think I am following the instructions, but the behavior I am seeing is that, at image-import-time, after filling in all the relevant information, when I hit the "import" button, the GUI just exits silently, with no apparent effect, and no reported error.
I am reasonably experienced doing simple IaaS stuff on AWS, but am new to the IBM cloud, and have not deployed a custom image on any cloud provider. I'm aware of "cloud-init" and have a reasonable general knowledge of what problem it solves (mapping cloud-provider metadata to config entries in the resulting VM at start time), but not a great deal about how it actually works.
What I have done is:
Got an IBM cloud account, and upgraded out of the free tier, for access to Power.
Activated the Power Systems Virtual Server service.
Activated the Cloud Object Storage service.
Created a bucket in the COS.
Created an HMAC-enabled service credential for this bucket.
Uploaded my image, in .tar.gz format, to the bucket (via the CLI, it's too big to upload by GUI).
The image is from here -- that page is a bit vague on which cloud providers it may be expected to work with, but AFAIK the IBM cloud is the only public cloud supporting Power?
Then, from the Power Systems Virtual Server service page, I clicked the "Boot Images" item on the left to show the empty list, then "Import Image" at the top of the list, and filled in the form. I have answers for all of the entries -- I can make up a new name, and I know the region of my COS, the image file name (the "key", in key-object storage parlance), the bucket name, and the access and secret keys, which are available from the credential description in the COS panel.
Then the "import" button lights up, and I click it, and the import dialog disappears, no error is reported, and no image is imported.
There are various things that might be wrong that I'm not sure how to investigate.
It's possible the credential is not connected to the bucket in the right way; I didn't really understand the documentation about that, but in the GUI it looks like it's in the right scope and has the right data in it.
It's also possible that only certain types of images are allowed, and my image is failing some kind of validation check, but in that case I would expect an error message?
I have found the image-importing instructions for the non-Power IaaS, but they seem to be out of scope. I have also found some docs on how to prepare a custom image, but they also seem to be for the non-Power IaaS.
What's the right way to do this?
Edit to add: Also tried doing this via the CLI ("ibmcloud pi image-import"), where it gets a time-out, apparently on the endpoint that's supposed to receive the image. Also, the command-line tool has an --os-type flag that apparently only takes [aix | sles | redhat | ibmi] -- my first attempt used raw, which is an error.
This is perhaps additional evidence that what I want to do is actually impossible?
PowerVS supports only .ova images. Those are not the same as the ones supported by VMware, for instance.
You can get one from here: https://public.dhe.ibm.com/software/server/powervs/images/
Or you can use the images available in the regional pool of images:
ibmcloud pi image-list-catalog
Once you have your first VM up and running, you can use https://github.com/ppc64le-cloud/pvsadm to create a new .ova. Today the tool only supports RHEL, CentOS and CoreOS.
If you want to easily play with PowerVS you can also use https://github.com/rpsene/powervs-actions.
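As a rough sketch of the CLI route, once a suitable .ova is sitting in your COS bucket (the image, bucket and key names below are placeholders, and every flag except --os-type is an assumption - check ibmcloud pi image-import --help for the exact options of your plugin version):

# browse the stock images available in the regional pool
ibmcloud pi image-list-catalog
# import your own .ova from COS; --os-type only accepts aix | sles | redhat | ibmi
ibmcloud pi image-import my-rhel-image --image-path my-bucket/my-image.ova \
  --os-type redhat --access-key <HMAC_ACCESS_KEY> --secret-key <HMAC_SECRET_KEY>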

Unable to open Google Cloud DNS page

I am unable to access Google Cloud DNS page.
All it shows is:
"DNS API is being enabled. This may take a minute or more."
Then the page reloads and shows the same message again.
The API is already enabled, and the records I created work; there is no problem with DNS itself.
I need to modify records, but I can't because of this problem.
I tried opening the page on different computers and in different browsers without add-ons; same result.
If there is a better place to ask, please do tell.
Thank you.
You should be able to access the page regardless of what computer / browser you're using.
If you cannot, it's either a temporary outage (which you can check here) or a bug.
The only thing to do here is to contact paid support for more immediate help, or, if you can afford the time, to report this at Google's IssueTracker and get help for free; however, that may take a few days. It is possible that only you are affected. Please describe the issue in as much detail as possible - this will expedite the process.
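In the meantime, since the Cloud DNS API itself is clearly working (your records resolve), you may be able to make the record changes from the gcloud CLI instead of the broken console page. A minimal sketch, with the zone name and record values as placeholders:

gcloud dns record-sets transaction start --zone=my-zone
gcloud dns record-sets transaction remove "203.0.113.10" --name=www.example.com. --ttl=300 --type=A --zone=my-zone
gcloud dns record-sets transaction add "203.0.113.20" --name=www.example.com. --ttl=300 --type=A --zone=my-zone
gcloud dns record-sets transaction execute --zone=my-zone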

PostgreSQL user password keeps getting cleared

I am having a big problem that is quite difficult to search for.
I have an Ubuntu server, on which I have installed:
GitLab (holds all the projects)
PostgreSQL (independent of the GitLab database; used for a personal project)
Tomcat with a web app (Spring Boot; it uses PostgreSQL)
This server is still for testing; it is used for specific things (I mean, its use and access are limited and controlled).
I am having various problems:
Very frequently, almost every day, the password of the postgres user on the PostgreSQL server gets "erased", without anyone doing it manually; it just happens on its own. I notice it because the application stops responding, and when I then access PostgreSQL I find that the postgres user has no password.
I have looked in many places and can't find anything; I really don't know where else to look. If this has happened to you, or you have any information about it, I would be grateful if you could share it.
------More information added----------
I was looking at the PostgreSQL logs from just before the authentication stopped working, and I see this.
These are times when no one could have been using the Spring Boot server:
--2020-01-17 00:30:21.286
And also the two log entries that appear before that moment. Could something be deleting my password?
Thank you.
PostgreSQL does not randomly delete its own passwords, and I really doubt Tomcat or GitLab do either. Indeed, they shouldn't even have access to the server as the 'postgres' user or any other superuser, and so shouldn't be able to do it even if they wanted to.
It seems likely that there is an intruder in your system. After gaining access, they create their own user with their own password. Disabling your normal superuser from logging on is then a common way to try to prevent you from regaining control and kicking them out. Do any users exist that you do not recognize?
The bit of the log file you posted clearly shows someone trying to guess your password, starting at 2:58. You aren't logging IP addresses (%h), so it doesn't show where they are coming from. It doesn't show that they succeeded, but unless you have log_connections = on, it wouldn't show successes.
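A minimal sketch of what regaining control and tightening the logging could look like on Ubuntu (the config paths assume the standard package layout; adjust the version directory and the password):

# check for roles you do not recognize, and reset the superuser password
sudo -u postgres psql -c "\du"
sudo -u postgres psql -c "ALTER USER postgres WITH PASSWORD 'a-new-strong-password';"
# in /etc/postgresql/<version>/main/postgresql.conf, start logging client IPs and connections:
#   log_connections = on
#   log_line_prefix = '%m [%p] %u@%d %h '
sudo systemctl reload postgresql
# also tighten /etc/postgresql/<version>/main/pg_hba.conf so only trusted addresses
# can connect, and firewall port 5432 off from the internet if it does not need to be public.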

Is it possible to restrict access by IP?

I've spent the day searching for a way to restrict the Google Cloud Storage JSON API to only accept calls from our server IP (note: I am using the Java client).
I found a really old post that seemed to indicate that it was possible. https://groups.google.com/forum/m/#!searchin/gs-discussion/Whitelist$20/gs-discussion/nTwMuygttbA
But things seem to have changed since then.
I tried looking in the quota section in the console but can't find anything there either.
Is this possible? Where can it be configured?

Postgres Encryption of configuration files

Currently, the largest security hole in Postgres is the .conf files that the database relies on, because someone with access to the system (not necessarily the database) can modify them and gain entry. Because of this, I am seeking resources on how to encrypt those .conf files and then decrypt them during each session of the database. Performance is not really an issue at this point. Does anyone have any resources on this, or has anyone developed a prototype with this functionality?
Edit
Since there seems to be some confusion about what I am asking, the scenario can best be illustrated on a Windows box with the following groups:
1) Administrators - System Administrators
2) Database Administrators - Postgres Administrators
3) Auditors - Security Auditors
The Auditors group typically needs access to log files and configuration files to ensure system security. However, the issue comes when a member of the Auditors group needs to view the Postgres configuration and log files: if that member decides they want to access the database, even though they do not have a database account, it is a very short task to break in. How does one go about preventing this? Answers such as "get better auditors" are quite poor, as you can never fully predict what people will do.
You are fine. There is no need to encrypt, so long as you have the permissions on the *.conf files set correctly.
Your postgresql.conf and pg_hba.conf should both be marked as readable only by the postgres user/group. If you don't have actual users with those permissions, then only root can see them.
So, are you trying to prevent root from making changes? Because a normal user can't change those files, and if you don't trust root, you've already lost.
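For reference, a minimal sketch of what those permissions look like on a typical Linux install (the paths and version directory are placeholders for a Debian/Ubuntu-style layout; on a Windows box the same idea would be expressed with NTFS ACLs):

# make the config files owned by, and readable only by, the postgres user
chown postgres:postgres /etc/postgresql/12/main/postgresql.conf /etc/postgresql/12/main/pg_hba.conf
chmod 600 /etc/postgresql/12/main/postgresql.conf /etc/postgresql/12/main/pg_hba.conf
# verify: only postgres (and root) can now read or modify them
ls -l /etc/postgresql/12/main/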
I think you might be stuck - here's what you said:
The Auditors group typically needs access to log files and configuration files
and then:
How does one go about preventing [Auditors from accessing the database using the values in the configuration files]?
If you really want to let Auditors get at your config files but are nervous about them accessing your database, your best bet would be to move your config files off of your server to somewhere else - and then make sure Auditors don't actually have access to your production systems. They could still look at the log files all they wanted, but they wouldn't be able to access the database server to try to get at the database itself.