Error when following instructions on "Hosting scopes" - bit.dev

I was trying to follow the instructions on hosting scopes (https://bit.dev/docs/scope/running-a-scope-server).
Commands entered:
docker run -it -p 4000:3000 bitcli/bit-server:latest
http://localhost:4000
bit remote add http://localhost:4000
I get the following error after the bit remote add command:
error: scope not found at /Users/tdugger/development/bit
There must be a step missing. The browser page shown does say the following, but I'm not sure what that means.
Set "defaultScope": "remote-scope" in workspace.jsonc
file and export components here.
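If I understand it correctly, that would mean something like the following in my workspace.jsonc (the "teambit.workspace/workspace" section is what my workspace file already contains; "remote-scope" is just the placeholder name from that message):
{
  "teambit.workspace/workspace": {
    // "remote-scope" is the placeholder from the browser page, not my actual scope name
    "defaultScope": "remote-scope"
  }
}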
Thanks for your help.

Related

GCloud auth login: Ports 8085 and 8184 possible blocked

I have installed, uninstalled, and reinstalled GCloud on macOS Monterey (M1 chip) and I'm facing the following situation: when I run gcloud auth login in Terminal, it displays this message:
WARNING: Failed to start a local webserver listening on any port between 8085 and 8184. Please check your firewall settings or locally running programs that may be blocking or using those ports.
WARNING: Defaulting to --no-browser mode.
You are authorizing gcloud CLI without access to a web browser. Please run the following command on a machine with a web browser and copy its output back here. Make sure the installed gcloud version is 372.0.0 or newer.
I have tried installing in several ways; the last one was this:
curl https://sdk.cloud.google.com | bash
exec -l $SHELL #restart shell
But I'm still facing that message.
Could anybody help me with this?
This happens because my Internet provider has blocked these ports; fixing it properly will require some changes to the router.
A workaround in the meantime:
gcloud auth login --no-launch-browser
Follow the instructions shown in the Terminal.
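In practice the flow looks roughly like this (the comments below paraphrase the prompts from memory; they are not verbatim gcloud output):
gcloud auth login --no-launch-browser
# gcloud prints a long accounts.google.com URL instead of opening a browser
# open that URL in any browser (it can be on another machine), sign in, and approve access
# copy the verification code Google shows and paste it back at the terminal prompt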

Adding localhost as a remote scope to my Bit.dev project returns a permission denied error

I'd like to add a local scope server as a remote scope to my project. However, when I run bit remote add "http://localhost:3001" I get an error: permission to scope "" (http://localhost:3001) was denied, along with a troubleshooting URL that redirects me to the Bit homepage.
I'm running bit version 0.0.779, and work within a company proxy.
Setting up and running my local scope
Following these bare scope directions I do the following:
mkdir my-scope
cd my-scope
bit init --bare
bit start --port localhost:3001
Adding remote scope
In an existing, local react project I do the following:
bit init
bit add /path/to/my/component
bit remote add http://localhost:3001
Problem
Running bit remote add http:localhost:3001 returns:
error: permission to scope "" (http://localhost:3001) was denied
see troubleshooting at https://legacy-docs.bit.dev/docs/setup-authentication#authentication-issues
Does anyone know why I might be getting denied from setting localhost as my remote scope?
You can use the Docker container to simplify the process of hosting scopes.
See details here - https://bit.dev/docs/scope/running-a-scope-server
Regardless, it seems the issue is just a typo: you forgot the // in the host URL parameter of your command.
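For example, roughly (the port mapping below is illustrative; use whatever port your scope server actually listens on - 3000 is just the port the bitcli/bit-server container exposes):
# with the docker image:
docker run -it -p 3000:3000 bitcli/bit-server:latest
bit remote add http://localhost:3000
# or, with your existing local server, just fix the URL:
bit remote add http://localhost:3001   # note the // after http: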

doctl install: neither $XDG_CONFIG_HOME nor $HOME are defined

I'm using GitHub Actions to configure doctl (DigitalOcean), but I get this error when the job runs on the runner:
>>> doctl version v1.77.0 installed to /opt/github/actions-runner/_work/_tool/doctl/1.77.0/x64
/opt/github/actions-runner/_work/_tool/doctl/1.77.0/x64/doctl auth init -t ***
Error: neither $XDG_CONFIG_HOME nor $HOME are defined
Any suggestions or tips on how to get past this? I searched DO's website and here, but couldn't find any relevant material. Thanks!
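The only workaround I can think of is defining HOME (or XDG_CONFIG_HOME) in the job before doctl auth init runs, something along these lines (the path and token variable name below are just guesses on my part):
export HOME=/home/actions-runner                 # any directory the runner user can write to
# or: export XDG_CONFIG_HOME=/home/actions-runner/.config
doctl auth init -t $DIGITALOCEAN_ACCESS_TOKEN    # token variable name is illustrative
but I'm not sure whether that's the right fix for a self-hosted runner.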

How to resolve DNS lookup error when trying to run example microservice application using minikube

Dear StackOverflow community!
I am trying to run https://github.com/GoogleCloudPlatform/microservices-demo locally on minikube, so I am following their development guide: https://github.com/GoogleCloudPlatform/microservices-demo/blob/master/docs/development-guide.md
After I successfully set up minikube (using the virtualbox driver, but I also tried hyperkit with the same results) and execute skaffold run, it eventually ends up with the following error:
Building [shippingservice]...
Sending build context to Docker daemon 127kB
Step 1/14 : FROM golang:1.15-alpine as builder
---> 6466dd056dc2
Step 2/14 : RUN apk add --no-cache ca-certificates git
---> Running in 0e6d2ab2a615
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.13/main: DNS lookup error
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.13/community: DNS lookup error
ERROR: unable to select packages:
git (no such package):
required by: world[git]
Building [recommendationservice]...
Building [cartservice]...
Building [emailservice]...
Building [productcatalogservice]...
Building [loadgenerator]...
Building [checkoutservice]...
Building [currencyservice]...
Building [frontend]...
Building [adservice]...
unable to stream build output: The command '/bin/sh -c apk add --no-cache ca-certificates git' returned a non-zero code: 1. Please fix the Dockerfile and try again..
The error message suggests that DNS does not work. I tried adding 8.8.8.8 to /etc/resolv.conf on the minikube VM, but it did not help. I've noticed that after I re-run skaffold run and it fails again, the content of /etc/resolv.conf returns to its original state, containing 10.0.2.3 as the only DNS entry. Reaching the outside internet and pinging 8.8.8.8 from within the minikube VM works.
Could you point me in the right direction for fixing this problem, and for learning how DNS inside minikube/Kubernetes works? I've heard that DNS problems inside a Kubernetes cluster are something you run into frequently.
Thanks for your answers!
Best regards,
Richard
Tried it with the docker driver, i.e. minikube start --driver=docker, and it works. Thanks Brian!
Sounds like the issue was resolved for the OP, but if you are using Docker inside minikube, the suggestion below worked for me.
Ref: https://github.com/kubernetes/minikube/issues/10830
minikube ssh
$>sudo vi /etc/docker/daemon.json
# Add "dns": ["8.8.8.8"]
# save and exit
$>sudo systemctl restart docker
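For reference, after that edit my /etc/docker/daemon.json inside the minikube VM contained roughly the following (keep any other keys already present in the file):
{
  "dns": ["8.8.8.8"]
}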

Docker dotnet run port not mapping, windows 10 host, linux container

I'm following a Pluralsight course (https://app.pluralsight.com/library/courses/docker-web-development/table-of-contents) which uses the older microsoft/aspnetcore-build image, but I'm running Core 2.1, so I'm using microsoft/dotnet:2.1-sdk instead.
The command I'm running is:
docker run -it -p 8080:5001 -v ${pwd}:/app -w "/app" microsoft/dotnet:2.1-sdk
and then once inside the TTY I do a dotnet run which gives me the following output:
Using launch settings from /app/Properties/launchSettings.json...
info: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[0]
      User profile is available. Using '/root/.aspnet/DataProtection-Keys' as key repository; keys will not be encrypted at rest.
info: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[58]
      Creating key {5445e854-c1d9-4261-82f4-0fc3a7543e0a} with creation date 2018-12-14 10:41:13Z, activation date 2018-12-14 10:41:13Z, and expiration date 2019-03-14 10:41:13Z.
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
      No XML encryptor configured. Key {5445e854-c1d9-4261-82f4-0fc3a7543e0a} may be persisted to storage in unencrypted form.
info: Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository[39]
      Writing data to file '/root/.aspnet/DataProtection-Keys/key-5445e854-c1d9-4261-82f4-0fc3a7543e0a.xml'.
warn: Microsoft.AspNetCore.Server.Kestrel[0]
      Unable to bind to https://localhost:5001 on the IPv6 loopback interface: 'Cannot assign requested address'.
warn: Microsoft.AspNetCore.Server.Kestrel[0]
      Unable to bind to http://localhost:5000 on the IPv6 loopback interface: 'Cannot assign requested address'.
Hosting environment: Development
Content root path: /app
Now listening on: https://localhost:5001
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.
Then, when I open a browser on my host and navigate to http://localhost:8080, I get "This page isn't working", "localhost didn't send any data", "ERR_EMPTY_RESPONSE".
I've tried a couple different port combinations too with the same result.
Can anyone spot where I went wrong? Or have any ideas / suggestions?
Not sure if this question is still relevant for you, but I also encountered this issue and will leave my solution here for others. I used PowerShell with the following docker command (almost the same as yours, except that I used internal port 90 instead of 5000 and added the --rm switch, which automatically removes the container when it exits):
docker run --rm -it -p 8080:90 -v ${pwd}:/app -w "/app" microsoft/dotnet /bin/bash
After that, I got an interactive bash shell, and when I typed dotnet run I got the same output as you and could not reach my site in the container via localhost:8080.
I resolved it by using the UseUrls method or the --urls command-line argument. Both indicate the IP addresses or host addresses, with ports and protocols, that the server should listen on for requests. Below are the solutions that worked for me.
Edit the CreateWebHostBuilder method in Program.cs like below:
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseUrls("http://+:90") // for your case you should use 5000 instead of 90
        .UseStartup<Startup>();
You can specify several ports if needed using the following syntax: .UseUrls("http://+:90;http://+:5000")
With this approach, you just type dotnet run in the bash shell and your container will be reachable at localhost:8080.
But with this first approach you alter the default behavior in your source code, which you might forget about and then have to debug and fix later. So I prefer the second approach, which doesn't change the source code. After typing the docker command and getting an interactive bash shell, instead of plain dotnet run, run it with the --urls argument like below (in your case use port 5000 instead of 90):
dotnet run --urls="http://+:90"
In the documentation there is also a third approach where you can use the ASPNETCORE_URLS environment variable, but that approach didn't work for me. I used the following command (with the -e switch):
docker run --rm -it -p 8080:90 -v ${pwd}:/app -w "/app" -e "ASPNETCORE_URLS=http://+:90" microsoft/dotnet /bin/bash
If you type printenv in bash you will see that the ASPNETCORE_URLS environment variable was passed to the container, but for some reason dotnet run ignores it.
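For completeness, here is a minimal sketch of the whole flow adapted back to the ports from the original question (assuming the plain-HTTP endpoint on 5000 is the one you want to expose, rather than the HTTPS one on 5001):
docker run --rm -it -p 8080:5000 -v ${pwd}:/app -w "/app" microsoft/dotnet:2.1-sdk /bin/bash
# inside the container:
dotnet run --urls="http://+:5000"
# then browse to http://localhost:8080 on the host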