When running gcloud init in PowerShell, the script seems to stop short and won't keep running.
gcloud init
What it shows:
python.exe : Welcome! This command will take you through the configuration of gcloud.
At C:\Users\minh.tran\AppData\Local\Google\Cloud SDK\google-cloud-sdk\bin\gcloud.ps1:117 char:3
+ & "$cloudsdk_python" $run_args_array
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (Welcome! This c...tion of gcloud.:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
Settings from your current configuration [dev-finops] are:
accessibility:
  screen_reader: 'False'
core:
  account: minh.tran@hireright.com
  disable_usage_reporting: 'True'
  project: prj-d-finops
proxy:
  address: 10.35.2.194
  password: HireR1ght2022!
  port: '8080'
  type: http
  username: corp\minh.tran
Pick configuration to use:
[1] Re-initialize this configuration [dev-finops] with new settings
[2] Create a new configuration
[3] Switch to and re-initialize existing configuration: [default]
gcloud init running in Cloud Shell or cmd will return the following and wait for user input:
Welcome! This command will take you through the configuration of gcloud.
Settings from your current configuration [dev-finops] are:
accessibility:
  screen_reader: 'False'
core:
  account: minh.tran@hireright.com
  disable_usage_reporting: 'True'
  project: prj-d-finops
proxy:
  address: 10.35.2.194
  password: HireR1ght2022!
  port: '8080'
  type: http
  username: corp\minh.tran
Pick configuration to use:
[1] Re-initialize this configuration [dev-finops] with new settings
[2] Create a new configuration
[3] Switch to and re-initialize existing configuration: [default]
Please enter your numeric choice:
How do I get gcloud init to run on PowerShell? What is the issue here?
Please see the requirement below:
Cloud SDK requires Python; supported versions are Python 3 (3.5 to 3.9). By default, the Windows version of Cloud SDK comes bundled with Python 3. To use Cloud SDK, your operating system must be able to run a supported version of Python.
To install Python 3.9 on Windows, just use this installer: https://www.python.org/ftp/python/3.9.13/python-3.9.13-amd64.exe
Be sure to close and reopen the PowerShell prompt after installation so that the updated PATH, which now includes the Python executable, is loaded.
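As a quick sanity check, you can verify which Python will be picked up and, if needed, point the SDK at a specific interpreter via the CLOUDSDK_PYTHON environment variable; the install path below is an assumption based on the installer's default per-user location:

# Confirm a supported Python 3 is now on PATH
python --version

# Optionally pin the SDK to a specific interpreter
# (hypothetical path; adjust to your actual install location)
$env:CLOUDSDK_PYTHON = "$env:LOCALAPPDATA\Programs\Python\Python39\python.exe"

gcloud init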
I am currently developing a project where I need to get the pod names of a Kubernetes cluster running on Rancher using Ansible. The main thing here is that I have a couple of problems that are preventing me from advancing.
I am executing a playbook to retrieve this information, instead of running a CLI command, because I want to manipulate those Rancher machines later on (e.g. install an rpm file).
Here is the playbook that I am executing to try to retrieve the pods' names from Rancher:
---
- hosts: localhost
  connection: local
  remote_user: root
  roles:
    - role: ansible.kubernetes-modules
    - role: hello-world
  vars:
    ansible_python_interpreter: '{{ ansible_playbook_python }}'
  collections:
    - community.kubernetes
  tasks:
    - name: Gather openShift Dependencies
      python_requirements_facts:
        dependencies:
          - openshift

    - name: Get the pods in the specific namespace
      k8s_info:
        kubeconfig: '/etc/ansible/RCCloudConfig'
        kind: Pod
        namespace: redmine
      register: pod_list

    - name: Print pod names
      debug:
        msg: "pod_list: {{ pod_list | json_query('resources[*].status.podIP') }}"

    - set_fact:
        pod_names: "{{ pod_list | json_query('resources[*].metadata.name') }}"
The problem is that I am getting a Kubernetes module error each time I try to run the playbook:
ERROR! the role 'ansible.kubernetes-modules' was not found in community.kubernetes:ansible.legacy:/etc/ansible/roles:/home/jcp/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles:/etc/ansible
The error appears to be in '/etc/ansible/GetKubectlPods': line 7, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
roles:
  - role: ansible.kubernetes-modules
    ^ here
If I remove that line from the code, where I try to retrieve that role, I still get a similar error:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ModuleNotFoundError: No module named 'kubernetes'
fatal: [localhost]: FAILED! => {"changed": false, "error": "No module named 'kubernetes'", "msg": "Failed to import the required Python library (openshift) on localhost.localdomain's Python /usr/bin/python3.6. Please read module documentation and install in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter"}
I have already tried installing the kubernetes module via ansible-galaxy on the machine, as well as openshift.
Not sure what I am doing wrong since there are so many possibilities for what could be going wrong here.
Ansible Version Output:
ansible 2.9.9
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/jcp/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/jcp/.local/lib/python3.6/site-packages/ansible
executable location = /home/jcp/.local/bin/ansible
python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
I've debugged the output of my python_requirements_facts task for the openshift dependency, and this is what I have:
ok: [localhost] => {
    "openshift_dependencies": {
        "changed": false,
        "failed": false,
        "mismatched": {},
        "not_found": [],
        "python": "/usr/bin/python3.6",
        "python_system_path": [
            "/tmp/ansible_python_requirements_info_payload_5_kb4a7s/ansible_python_requirements_info_payload.zip",
            "/usr/lib64/python36.zip",
            "/usr/lib64/python3.6",
            "/usr/lib64/python3.6/lib-dynload",
            "/home/jcp/.local/lib/python3.6/site-packages",
            "/usr/local/lib/python3.6/site-packages",
            "/usr/local/lib/python3.6/site-packages/openshift-0.10.0.dev1-py3.6.egg",
            "/usr/lib64/python3.6/site-packages",
            "/usr/lib/python3.6/site-packages"
        ],
        "python_version": "3.6.8 (default, Nov 21 2019, 19:31:34) \n[GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]",
        "valid": {
            "openshift": {
                "desired": null,
                "installed": "0.10.0.dev1"
            }
        }
    }
}
Thanks for your help in advance!
Edit: the answer below was given for the OP's specific Ansible version (i.e. 2.9.9) and is still valid if you still use it. Since version 2.10, you also need to install the relevant Ansible collection if it is not already present:
ansible-galaxy collection install kubernetes.core
See the latest module documentation for more information.
In Ansible 2.9.9, you're not supposed to do anything special to use the module besides installing the needed Python dependencies. See the module documentation for your Ansible version.
Remove the line - role: ansible.kubernetes-modules, unless it is a role of your own, in which case you will have to tell us more, because this is not a correct declaration.
Remove the collections declaration.
Add the following task somewhere before using the module:
- name: Make sure python deps are installed
  pip:
    name: openshift
Your existing python_requirements_facts task does nothing more than report whether the dependency is found. Register the result and debug it to see for yourself.
Now use the k8s_info module normally.
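Putting this together, a minimal sketch of the corrected playbook could look like the following (kubeconfig path and namespace are taken from the question; the task names are illustrative):

---
- hosts: localhost
  connection: local
  vars:
    ansible_python_interpreter: '{{ ansible_playbook_python }}'
  tasks:
    # Install the python dependency the k8s modules need
    - name: Make sure python deps are installed
      pip:
        name: openshift

    # Query the cluster for pods in the redmine namespace
    - name: Get the pods in the specific namespace
      k8s_info:
        kubeconfig: '/etc/ansible/RCCloudConfig'
        kind: Pod
        namespace: redmine
      register: pod_list

    # Extract just the pod names from the returned resources
    - name: Extract pod names
      set_fact:
        pod_names: "{{ pod_list | json_query('resources[*].metadata.name') }}"

    - name: Print pod names
      debug:
        var: pod_names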
I have a k8s_object rule to apply a deployment to my Google Kubernetes Cluster. Here is my setup:
load("#io_bazel_rules_docker//nodejs:image.bzl", "nodejs_image")
nodejs_image(
name = "image",
data = [":lib", "//:package.json"],
entry_point = ":index.ts",
)
load("#io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")
k8s_object(
name = "k8s_deployment",
template = ":gateway.deployment.yaml",
kind = "deployment",
cluster = "gke_cents-ideas_europe-west3-b_cents-ideas",
images = {
"gcr.io/cents-ideas/gateway:latest": ":image"
},
)
But when I run bazel run //services/gateway:k8s_deployment.apply, I get the following error:
INFO: Analyzed target //services/gateway:k8s_deployment.apply (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //services/gateway:k8s_deployment.apply up-to-date:
bazel-bin/services/gateway/k8s_deployment.apply
INFO: Elapsed time: 0.113s, Critical Path: 0.00s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
$ /snap/bin/kubectl --kubeconfig= --cluster=gke_cents-ideas_europe-west3-b_cents-ideas --context= --user= apply -f -
2020/02/12 14:52:44 Unable to publish images: unable to publish image gcr.io/cents-ideas/gateway:latest
error: no objects passed to apply
error: no objects passed to apply
It doesn't push the new image to the Google Container Registry.
Strangely, this worked a few days ago. But I didn't change anything.
Here is the full code if you need to take a closer look: https://github.com/flolude/cents-ideas/blob/069c773ade88dfa8aff492f024a1ade1f8ed282e/services/gateway/BUILD
Update
I don't know if this has something to do with this issue but when I run
gcloud auth configure-docker
I get some warnings:
WARNING: `docker-credential-gcloud` not in system PATH.
gcloud's Docker credential helper can be configured but it will not work until this is corrected.
WARNING: Your config file at [/home/flolu/.docker/config.json] contains these credential helper entries:
{
  "credHelpers": {
    "asia.gcr.io": "gcloud",
    "staging-k8s.gcr.io": "gcloud",
    "us.gcr.io": "gcloud",
    "gcr.io": "gcloud",
    "marketplace.gcr.io": "gcloud",
    "eu.gcr.io": "gcloud"
  }
}
Adding credentials for all GCR repositories.
WARNING: A long list of credential helpers may cause delays running 'docker build'. We recommend passing the registry name to configure only the registry you are using.
gcloud credential helpers already registered correctly.
I had google-cloud-sdk installed via snap. What I did to make it work was to remove google-cloud-sdk via
snap remove google-cloud-sdk
and then follow those instructions to install it via
sudo apt install google-cloud-sdk
Now it works fine.
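As a quick check after reinstalling (not part of the original fix, just a sanity check), you can confirm the credential helper is now on PATH and re-run the configuration step; the earlier warning should be gone:

# the helper the snap install failed to expose should now resolve
which docker-credential-gcloud

# re-register the gcloud credential helpers for GCR
gcloud auth configure-docker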
We have a Service Fabric cluster in our development environment using 2 VMs. I was trying to upgrade the application deployed in SF using the following command:
Start-ServiceFabricApplicationUpgrade -ApplicationName "fabric:/ApplicationName" -ApplicationTypeVersion "3.7.2625.0" -UnMonitoredAuto
I get the following error as a result:
Start-ServiceFabricApplicationUpgrade : Application type and version not found
At line:1 char:1
+ Start-ServiceFabricApplicationUpgrade -ApplicationName "fabric:/ApplicationName" ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (Microsoft.Servi...usterConnection:ClusterConnection) [Start-ServiceFabricApplicationUpgrade], FabricElementNotFoundException
+ FullyQualifiedErrorId : UpgradeApplicationErrorId,Microsoft.ServiceFabric.Powershell.StartApplicationUpgrade
I would like to know whether there are any configurations I need to change at the ClusterConfiguration level. Any help would be appreciated.
Thanks.
There are 3 simple steps to upgrade an application:
Copy-ServiceFabricApplicationPackage
Register-ServiceFabricApplicationType
Start-ServiceFabricApplicationUpgrade
From the message you posted, the error is likely because you missed step 2.
If you have executed steps 1, 2 and 3, verify that:
The application package has been registered correctly
The application version you registered is correct and matches in both the package and the upgrade command
The existing application and the registered one are of the same application type
Check this doc for more info: Service Fabric application upgrade using PowerShell
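For reference, here is a minimal PowerShell sketch of the three steps; the package path, image store connection string, and image store path below are assumptions, while the application name and version are taken from your command:

# 1. Copy the new package to the cluster's image store
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath .\ApplicationNamePkg `
    -ImageStoreConnectionString "fabric:ImageStore" `
    -ApplicationPackagePathInImageStore "ApplicationName_3.7.2625.0"

# 2. Register the new application type/version
Register-ServiceFabricApplicationType -ApplicationPathInImageStore "ApplicationName_3.7.2625.0"

# Verify that the type and version are now registered
Get-ServiceFabricApplicationType

# 3. Start the upgrade
Start-ServiceFabricApplicationUpgrade -ApplicationName "fabric:/ApplicationName" `
    -ApplicationTypeVersion "3.7.2625.0" -UnMonitoredAuto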
I'm trying to use Wercker, but I don't know why my tests can't connect to my MongoDB.
I'm using Sails + sails-mongo, and when I run npm test I always get an error that it can't connect to MongoDB. This is my wercker.yml:
box: nodesource/trusty:0.12.7
services:
  - id: mongo:2.6

# Build definition
build:
  # The steps that will be executed on build
  steps:
    - script:
        name: set NODE_ENV
        code: export NODE_ENV=development
    # A step that executes the `npm install` command
    - npm-install
    # A step that executes the `npm test` command
    - npm-test
    # A custom script step; the name value is used in the UI
    # and the code value contains the command that gets executed
    - script:
        name: echo nodejs information
        code: |
          echo "node version $(node -v) running"
          echo "npm version $(npm -v) running"
This is the error message:
warn: `sails.config.express` is deprecated; use `sails.config.http` instead.
Express midleware for passport
error: A hook (`orm`) failed to load!
1) "before all" hook
2) "after all" hook
0 passing (2s)
2 failing
1) "before all" hook:
Uncaught Error: Failed to connect to MongoDB. Are you sure your configured Mongo instance is running?
Error details:
{ [MongoError: connect ECONNREFUSED] name: 'MongoError', message: 'connect ECONNREFUSED' }
at net.js:459:14
2) "after all" hook:
Uncaught Error: Failed to connect to MongoDB. Are you sure your configured Mongo instance is running?
Error details:
{ [MongoError: connect ECONNREFUSED] name: 'MongoError', message: 'connect ECONNREFUSED' }
at net.js:459:14
Out of the box, MongoDB has no authentication, so you just have to provide Sails with the right host and port.
Define a new connection in your Sails app in config/connections.js:
mongodbTestingServer: {
  adapter: 'sails-mongo',
  host: process.env.MONGO_PORT_27017_TCP_ADDR,
  port: process.env.MONGO_PORT_27017_TCP_PORT
},
Concerning MONGO_PORT_27017_TCP_ADDR and MONGO_PORT_27017_TCP_PORT, these two environment variables are created by Wercker when you declare a mongo service. That way, you will be able to connect your application to your database with the right host and port.
Add a new environment to your Sails app in config/env/testing.js. It will be used by Wercker:
module.exports = {
  models: {
    connection: 'mongodbTestingServer'
  }
};
In your wercker.yml file, I recommend using the Ewok stack (based on Docker); you can activate it in the settings of your application. Here is some useful information concerning migration to the Ewok stack. My example uses a box based on a Docker image.
# use the latest official stable node image hosted on DockerHub
box: node
# use the mongo (v2.6) image hosted on DockerHub
services:
  - id: mongo:2.6

# Build definition
build:
  steps:
    # Print node and npm version
    - script:
        name: echo nodejs information
        code: |
          echo "node version $(node -v) running"
          echo "npm version $(npm -v) running"
    - script:
        name: set NODE_ENV
        code: |
          export NODE_ENV=testing
    # install npm dependencies of your project
    - npm-install
    # run tests
    - npm-test
To see all environment variables in your Wercker build, add this step:
- script:
    name: show all environment variables
    code: |
      env
It should work.
I have read the cookbook regarding deploying my Symfony2 app to a production environment. It works great in dev mode, but in prod mode the app first wouldn't allow signing in (it said bad credentials, though I signed in with those very credentials in dev mode), and later, after an extra run of clearing and warming up the prod cache, I just get HTTP 500 from my prod route.
I had a look in the config files and wonder if this has anything to do with it:
config_dev.yml:
imports:
    - { resource: config.yml }

framework:
    router: { resource: "%kernel.root_dir%/config/routing_dev.yml" }
    profiler: { only_exceptions: false }

web_profiler:
    toolbar: true
    intercept_redirects: false

monolog:
    handlers:
        main:
            type: stream
            path: %kernel.logs_dir%/%kernel.environment%.log
            level: debug
        firephp:
            type: firephp
            level: info

assetic:
    use_controller: true
config_prod.yml:
imports:
    - { resource: config.yml }

#doctrine:
#    orm:
#        metadata_cache_driver: apc
#        result_cache_driver: apc
#        query_cache_driver: apc

monolog:
    handlers:
        main:
            type: fingers_crossed
            action_level: error
            handler: nested
        nested:
            type: stream
            path: %kernel.logs_dir%/%kernel.environment%.log
            level: debug
I also noticed that there is a routing_dev.yml but no routing_prod. The prod environment works great on my localhost, however, so...?
In your production environment, when you run the app/console cache:warmup command, you need to make sure you run it like this: app/console cache:warmup --env=prod --no-debug.
Also, remember that the command will warm up the cache as the current user, so all files will be owned by the current user and not the web server user (e.g. www-data). That is probably why you get a 500 server error. After you warm up the cache, run this: chown -R www-data.www-data app/cache/prod (be sure to replace www-data with your web server user).
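Put together, the sequence could look like this (a sketch assuming www-data is your web server user):

# rebuild the prod cache without debug mode
php app/console cache:clear --env=prod --no-debug
php app/console cache:warmup --env=prod --no-debug

# hand ownership of the generated cache back to the web server user
chown -R www-data:www-data app/cache/prod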
Make sure your parameters.ini file has the proper configs in place, since it's common for this file not to be checked in to whatever code repository you might be using. Or (and I've even done this) it's possible to simply forget to copy parameters from dev into the prod parameters.ini file.
You'll also want to look in app/logs/prod.log to see what happens when you attempt to log in.
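For example, you can watch the log while reproducing the failed login (log path taken from the answer above):

# follow the prod log while you attempt to log in
tail -f app/logs/prod.log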