Change timezone for IBM Bluemix/Cloud Cloud Foundry app - ibm-cloud

I have a Python application running as a Cloud Foundry app.
The timezone for the entire container/VM for my application is UTC, even though it is deployed in the US South region.
It's not just that messages in the log files appear to be from the future; some parts of the application rely on the current local time. I tried to change the time zone via SSH from the application's management page, but of course I do not have permissions.
I notice something similar on DSX.
Questions:
How do I change the time zone for the application and the container/VM it is running in?
Shouldn't the timezone be set to whatever region the app is deployed in?
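One workaround worth trying (not taken from this thread; it assumes the buildpack's runtime honors the standard TZ environment variable, which Linux-based Python runtimes generally do) is to set TZ on the app and restage it, e.g. for US Central time:
cf set-env <app-name> TZ America/Chicago
cf restage <app-name>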

Related

Change the Database Address of an existing Meteor App running on an Ubuntu Cloud Server

I have a Meteor app running on an Ubuntu Droplet on Digital Ocean (your basic virtual machine). This app was written by a company that went out of business and left us with nothing.
The database is a MongoDB instance currently running on IBM Compose. Compose is shutting down in a month, so the database needs to be moved and our app needs to connect to the new database.
I had no issues exporting and creating a MongoDB with all the data on a different server.
I cannot for the life of me figure out where on the live Meteor app server I would change the address of the database connection. There is no simple top-level config file where I can change this? Does anyone out there know where I would do this?
I realize that in the long term I will need to either rewrite or deprecate this aging app, but in the short term the company relies on it and IBM decided to just shut down their Compose service so please help!!
It is mostly the MONGO_URL and MONGO_OPLOG_URL that are configured, as environment variables: https://docs.meteor.com/environment-variables.html#MONGO-OPLOG-URL
You don't set these within the code but during deployment. If you are running on localhost and want to connect to the external MongoDB, you can simply use:
$ MONGO_URL="mongodb://user:password@myserver.com:port" meteor
If you want to deploy the app, you should follow the docs: https://galaxy-guide.meteor.com/mongodb.html#authentication
If you use MUP, then configure the mongo section appropriately: https://meteor-up.com/docs.html#mongodb
Edit: If your app was previously deployed using MUP, you can try to recover the environment variables from /opt/app-name/config (where app-name is the name of your app). That directory contains env.list (with all environment variables, including your MONGO_URL) and start.sh, which you can use to recreate the mup.js config.
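As a sketch of what that recovery might look like on the Droplet (assuming a MUP-style layout as described above; newhost.example.com and the credentials are placeholders for your new server's details):
cat /opt/app-name/config/env.list
# edit MONGO_URL (and MONGO_OPLOG_URL, if set) to point at the new server, e.g.
# MONGO_URL=mongodb://user:password@newhost.example.com:27017/appdb
nano /opt/app-name/config/env.list
bash /opt/app-name/config/start.sh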

404 error when accessing media in Strapi some time after uploading

Some context:
I have Strapi deployed successfully on Heroku with a MongoDB backend, and I can add/edit entries. My issue comes when I upload an image using the Media Library plugin. I'm able to upload an image and have my frontend access it initially, displaying it etc. After some time, like an hour or so or the next day, the record of the file is still present, as can be seen with this endpoint:
https://blog-back-end-green.herokuapp.com/upload/files/
However, the URL to access the media no longer works as it used to, and I get a 404 error when I follow it, e.g.
https://blog-back-end-green.herokuapp.com/uploads/avatarperson_32889bfac5.png
I'm new to Strapi, so any help/guidance is appreciated.
The docs address your question directly:
Like with project updates on Heroku, the file system doesn't support local uploading of files as they will be wiped when Heroku "Cycles" the dyno. This type of file system is called ephemeral, which means the file system only lasts until the dyno is restarted (with Heroku this happens any time you redeploy or during their regular restart, which can happen every few hours or every day).
Due to Heroku's filesystem you will need to use an upload provider such as AWS S3, Cloudinary, or Rackspace. You can view the documentation for installing providers here, and you can see a list of providers from both Strapi and the community on npmjs.com.
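As a sketch, installing and wiring up the AWS S3 provider for a Strapi v3 app might look like this (the package name is from the provider's docs; the config variable names are whatever your config/plugins.js references, and the values are placeholders):
npm install strapi-provider-upload-aws-s3 --save
heroku config:set AWS_ACCESS_KEY_ID=<your-key> AWS_ACCESS_SECRET=<your-secret> AWS_BUCKET=<your-bucket>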
When your app runs, it consumes Heroku dyno hours.
When your app idles (automatically, after 30 minutes of inactivity), it stops consuming dyno hours; as long as you still have dyno hours, your app will be live and publicly accessible.
Generally, authentication failures return a 401 (Unauthorized) error, but on some platforms a 404 can be returned instead.
Check that your second request has the correct Authorization header.
Check out the roles-permissions documentation.

How to change Date and Time settings on Google Cloud Instance?

I am trying to change the date and time settings to UTC+10 (Canberra, Sydney, Melbourne) on the instance, but it always keeps rolling back to UTC+00 (Monrovia, Reykjavik). It doesn't matter if I set the time zone to automatic.
The zone "australia-southeast1-b" on the provided screenshot is a deployment area for Google Cloud Platform resources, where the physical hosts, your VM instance is running on, are physically located. This is a geographical zone. It is not relevant to time.
To configure date and time in Windows, you should:
set the correct time zone in Windows, and
make sure a time server is reachable
A Google Compute Engine VM instance is just a virtual machine that boots with its hardware clock set to UTC, as many modern servers do nowadays.
If you look at the VM instance logs in the GCP Console, you'll see that the VM BIOS reports time in UTC:
2019/10/3 14:9:44 Begin firmware boot time
After a while, the BIOS hands over to the bootloader:
2019/10/3 14:9:45 End firmware boot time
Booting from Hard Disk 0...
The OS boots up. Behind the scenes, the OS time service recognizes the system timezone, then sets up and synchronizes time with the time source. From then on, running programs and services report events based on the local system time:
...
2019/10/03 09:10:05 GCEWindowsAgent: GCE Agent Started (version 4.6.0#1)
In the Windows Event Log you should see entries made by the Time-Service:
Log Name: System
Source: Time-Service
Level: Information
The time provider NtpClient is currently receiving valid time data from metadata.google.internal,0x1 (ntp.m|0x1|0.0.0.0:123->169.254.169.254:123).
The time service is now synchronizing the system time with the time source metadata.google.internal,0x1 (ntp.m|0x1|0.0.0.0:123->169.254.169.254:123).
In the command prompt you can verify that the time configuration and state are correct:
C:\Users\user>systeminfo | find /i "Time"
System Boot Time: 10/3/2019, 9:09:49 AM
Time Zone: (UTC-06:00) Central Time (US & Canada)
Hence you don't need to synchronize time manually or with a startup script. The time service will do this for you: it synchronizes the system time shortly after boot and keeps it in sync afterwards.
All you need to do is set the correct time zone and Internet time server in Windows, and then make sure the time server is reachable over the network.
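If you prefer the command prompt over the Settings UI, Windows ships the tzutil utility; for the UTC+10 zone from the question, that would be:
tzutil /g
tzutil /s "AUS Eastern Standard Time"
(tzutil /g prints the current zone; /s sets it.)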
If you can't wait for the time-sync cycle to complete, you can log on to Windows and force time synchronization manually:
net stop W32Time
net start W32Time
w32tm /resync /force
To the OP: if I understand your question correctly and your instance is running Linux, you can set the time zone with:
timedatectl set-timezone "Australia/Melbourne"
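To find the exact zone name and verify the change afterwards (set-timezone may require sudo):
timedatectl list-timezones | grep Australia
timedatectl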

How to check syslog in GAE

I am using the GAE standard environment for Python.
I'd like to check the syslog, but I could not find it in Stackdriver Logging (only request_log and activity_log).
Is it not possible to view the syslog in the App Engine standard environment?
You do not have access to the actual syslog of the systems on top of which your standard env GAE app instances are running. I suspect the user-provided app code might not even have write permissions to such a logfile.
If you're looking for logs produced by your app (for example via the logging facility), they're bundled under the request_log entries for the requests that triggered the respective code execution; see Reading Application Logs on Google App Engine from Developer Console.
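If you prefer the command line to the console, the gcloud CLI can read those same log entries (shown here for the default service; adjust -s for yours):
gcloud app logs tail -s default
gcloud app logs read --limit 50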

Is there a recommended 3rd party solution to managing logs on Bluemix?

We have a handful of Ruby (Rails/Sinatra) apps and are looking for an easy means of managing retention, search and analysis of our logs for these applications.
The initial problem was that every time we'd push a new version of our apps the logs would disappear.
We then started streaming our logs to a file via a terminal using:
cf logs AppName
However, the logfiles get very big very fast and quickly become a problem.
We know that the Bluemix Monitoring and Analytics service provides a lot of the function we need. We're looking that over, but want to know if there are other recommended/proven options.
Thanks
We found several 3rd party apps that provide the functions we need.
To use any of these, we first had to configure third-party logging on Bluemix using the steps below.
Any 3rd-party logger that supports the syslog protocol can be used. The initial setup, registration, and configuration of the log management service is well covered at Configuring Selected Third-Party Log Management Services.
What comes out of the configuration step is a syslog URL, which will be the destination for your logs.
Once the logging service is configured, a user-provided service instance needs to be created to stream the logs to the logging service. We did this using:
cf create-user-provided-service <user-provided-service_name> -l <syslog_URL>
The last step is to bind the service instance to our Ruby apps:
cf bind-service AppName <user-provided-service_name>
For the changes to take effect, we then had to restage our Ruby apps:
cf restage AppName
There was a brief delay between when the logs were generated and when they showed up in the logging service, but overall this is working out OK for us so far.
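Putting the steps together with a hypothetical syslog endpoint (replace the host and port with the URL your log management service gives you):
cf create-user-provided-service my-log-drain -l syslog-tls://logs.example.com:6514
cf bind-service AppName my-log-drain
cf restage AppName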