xray newbie - first docker scan from cli - jfrog-xray

I'm starting with JFrog Xray.
I created an account on the JFrog cloud platform using my GitHub credentials.
I created an identity token for authentication.
I have a Linux box with only a terminal (no GUI).
On it, I downloaded a project from GitHub and built a Docker image from the source code.
I can now see the images using the command sudo docker images.
I then installed the JFrog CLI using the command
curl -fL https://getcli.jfrog.io\?setup | sh
The CLI gets installed, but since there is no browser, the integration with the cloud platform does not happen.
I then run the Docker scan using the command
sudo jf docker scan <image-name> --url <url> --access-token <access-token>
The error I receive is as below
Get "api/v1/system/version": unsupported protocol scheme ""
Any help/guidance is sincerely appreciated.
Thanks

The curl -fL https://getcli.jfrog.io?setup | sh command installs JFrog CLI and then initiates the jf setup command. The jf setup command does the following:
1. Opens the default browser, and allows you to sign in to a new and free JFrog environment in the cloud.
2. Configures JFrog CLI with the new JFrog instance connection details.
Since your Linux box includes no browser, I assume step #1 fails.
No worries though. Since the setup of a free JFrog environment requires a browser, here's what you can do to set up an environment and use it from your Linux box:
1. Set up the free JFrog environment in the cloud from a different machine with a browser installed, using this page - https://jfrog.com/start-free/#saas
2. Log into your new environment UI.
3. Go to "Integrations" on the left menu panel.
4. Copy the "JFrog CLI" installation command, and run it from your Linux box.
This should get JFrog CLI installed and set up with your new JFrog environment.
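If you'd rather stay entirely in the terminal, a non-interactive configuration along these lines should also work (the server ID "my-server" and the platform URL are placeholders; note that the URL must include the https:// scheme, which is usually what the "unsupported protocol scheme" error is pointing at):
# "my-server" is an arbitrary server ID; use your full platform URL, scheme included
jf config add my-server --url=https://<your-instance>.jfrog.io --access-token=<your-access-token> --interactive=false
# Once configured, the scan no longer needs --url/--access-token
sudo jf docker scan <image-name>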

Related

How to run a Visual Studio Code compiled/minified Web build in the browser

I am able to run VS Code development mode in my browser by installing the required packages and running a few commands, but I have failed to build a compiled and minified version and run it in the browser.
I am able to run VS Code web development mode in the browser on Ubuntu 20.04 with the following commands:
sudo apt-get install build-essential g++ libx11-dev libxkbfile-dev libsecret-1-dev
yarn
yarn watch
./scripts/code-web.sh
I'm able to build with the following command, but I am missing instructions on how to run a compiled version of VS Code Web in the browser.
yarn gulp vscode-web-min
Can anyone tell me how to or point me to the right documentation?
Recently I came across the same question. I found this repo, https://github.com/Felx-B/vscode-web, which answers it perfectly. It is not a fork of VS Code, but rather a set of helper scripts for building the VS Code web edition.
Clone the repository and run the following commands to build vscode-web:
$ yarn build
$ yarn prepare-demo
$ yarn demo
Open http://localhost:8080 and you'll see the web version running in your browser. It is limited in features compared to the native or server version: the terminal is disabled and files are served from an in-browser file system.
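If you'd rather serve the build output yourself instead of using the bundled demo script, any static file server should do; a minimal sketch (the ./dist output directory is an assumption about this repo's layout, so check what the build step actually produces):
# Serve the built assets on port 8080 (output directory name is an assumption)
npx http-server ./dist -p 8080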

Manage API keys from Cloud Shell Editor

I'm trying to manage API keys from the Cloud Shell Editor to restore an API key (auto-created by Firebase) that I deleted.
I try to run gcloud alpha services api-keys undelete BYuihiuYUIGyugIIHU... but I receive the error: ERROR: (gcloud.alpha.services.api-keys.undelete) NOT_FOUND: Method not found.
For testing, I also try to run the gcloud alpha services api-keys list command, but I receive the error: ERROR: (gcloud.alpha.services.api-keys.list) Projects instance [PROJECT_ID] not found: Method not found.
What am I doing wrong?
Thank you so much
Thank you so much for your answer.
The result of the commands is:
version -->
Google Cloud SDK 327.0.0
alpha 2021.02.05
beta 2021.02.05
bq 2.0.64
core 2021.02.05
gsutil 4.58
kpt 0.37.1
minikube 1.17.1
skaffold 1.19.0
Component list -->
Installed │ gcloud Alpha Commands │ alpha │ < 1 MiB
I tried to remove and reinstall the Cloud SDK with:
sudo apt purge --autoremove google-cloud-sdk
sudo apt-get install google-cloud-sdk
but nothing has changed.
It is probable that, for some reason, your Cloud Shell is not running the latest version of gcloud (Cloud SDK). The latest version is 328.0.0. My Cloud Shell is running 327.0.0, which includes the alpha commands (see below), and the commands work for me.
What is the result of the following in Cloud Shell?
gcloud version
gcloud components list
Does the list include gcloud Alpha Commands?
I'm confident that gcloud Alpha Commands are installed (by default) on Cloud Shell and so suspect that, for some reason, you're running an outdated version of Cloud SDK.
I'm unsure how you can install gcloud Alpha Commands under Cloud Shell if the component isn't installed, because the following gcloud command will error under Cloud Shell, but it should (!) tell you which apt-get install command you will need:
gcloud components install alpha
I assumed it was not permitted (since Cloud Shell is managed), but it is possible to self-update the Cloud SDK in Cloud Shell. The following command will again tell you that you can't run it in Cloud Shell, but it should give you a set of apt-get install commands that you can use to perform the update:
gcloud components update
Here's a link to the release notes for Cloud SDK. It's not obvious from these notes when these methods were added.
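For reference, on an apt-managed installation the suggested commands usually boil down to upgrading the SDK package itself; roughly (assuming the standard google-cloud-sdk apt package):
sudo apt-get update
# Upgrade only the already-installed SDK package
sudo apt-get --only-upgrade install google-cloud-sdk
# Confirm the SDK is now at the latest release
gcloud version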

docker ecs command not found

I am trying to use the new docker ecs feature, but I just get the error 'ecs' is not a docker command.
I'm using the latest version of Docker Edge on macOS 10.15.7.
Do I need some additional steps to activate the command?
For anyone following documentation about docker ecs written before some development changes: while it used to be a plugin, the ECS integration is now part of the Docker CLI itself.
This document covers how to set it up using a context:
https://docs.docker.com/engine/context/ecs-integration/
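A minimal sketch of that flow, assuming your AWS credentials are already set up locally (the context name "myecscontext" is arbitrary):
# Create a Docker context backed by ECS (prompts for an AWS profile/credentials)
docker context create ecs myecscontext
# Switch to it and deploy a compose file to ECS
docker context use myecscontext
docker compose up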

Using an Azure DevOps Python artifact repo on a Microsoft machine learning server

I have a SQL Server 2017 instance with Machine Learning Services installed in-database. I have a custom module packaged as a wheel and published to an Azure DevOps Python artifact feed, which I can install from other machines using the Azure Artifacts keyring module to authenticate.
I want to set up my machine learning server so I can pip install from this Azure DevOps package feed, but after I install the keyring and artifacts-keyring modules per the documentation and try to pip install with the -i option pointing at the feed URL, I get prompted to authenticate with a username/password, which does not work. This is different behavior from my development machines, where the keyring modules authenticate me automatically.
Looking at the GitHub page for the artifacts-keyring module, it looks like I need pip 19.2 or greater, but the machine learning server has pip 9.0.1. Running .\pip.exe install --upgrade pip from the PYTHON_SERVICES directory gives me an error:
The system cannot move the file to a different disk drive: 'e:\\program files\\microsoft sql server\\mssql14.mssqlserver\\python_services\\scripts\\pip.exe' -> 'C:\\Users\\username\\AppData\\Local\\Temp\\7\\pip-qxx3khcz-uninstall\\program files\\microsoft sql server\\mssql14.mssqlserver\\python_services\\scripts\\pip.exe
Going further down the rabbit hole, it looks like I might need to unbind/bind the updated binaries. Has anyone configured their MS machine learning server to use an Azure DevOps Python artifact repo as a pip index? Should I approach deploying my modules a different way?
What I did which worked for me:
Stop all of the SQL server services. I think I would have only needed to stop the Jumpstart service though.
Run the basic get-pip.py script from the PYTHON_SERVICES directory that the ML server is using. This installed the latest version of pip, as verified with .\Scripts\pip.exe -V
I then ran .\Scripts\pip.exe install keyring artifacts-keyring
I then installed my module from my index/repo .\Scripts\pip.exe install -i https://myIndexURL/ MyModule
Brought all the SQL services up and confirmed I can use my module.
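As a side note, once pip is new enough you can persist the feed as the default index instead of passing -i every time; a sketch with placeholder organization/feed names (the URL format is the standard Azure Artifacts PyPI-style endpoint):
# Store the feed URL in pip's config so -i is no longer needed
.\Scripts\pip.exe config set global.index-url https://pkgs.dev.azure.com/<organization>/_packaging/<feed-name>/pypi/simple/
# Subsequent installs should pick up credentials from artifacts-keyring
.\Scripts\pip.exe install MyModule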

VSCode: Using dev container remotely without local installation of docker

Currently, I have:
a desktop with low system specs, Windows 7 Pro (without Admin Rights), without docker.
a Virtual Machine with Centos7, and docker installed.
On my desktop, I can either use:
my local installation of VSCode, and Remote - SSH to develop remotely on my VM. It works well, but I can't combine this with Remote - Containers.
X11 forwarding to develop directly with VSCode installed on this VM. I can use Remote - Containers, but X11 is very slow.
Is there a way, with local VSCode, to develop in a remote container, without local installation of docker (obviously with docker installed on the host)?
Is there a way, with local VSCode, to develop in a remote container, without local installation of docker (obviously with docker installed on the host)?
No. In the 'advanced containers' docs it says
You can use the Docker CLI locally with a remote Docker host by setting local environment variables like DOCKER_HOST, DOCKER_CERT_PATH, DOCKER_TLS_VERIFY. Since VS Code uses the Docker CLI under the hood, you can use these same environment variables to connect the Remote - Containers extension to the same remote host.
Note that it is referring to the client (using the Docker CLI locally), not the remote. This is from Developing inside a container on a remote Docker host.
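As a sketch of what that looks like in practice (the user and host names are placeholders; this assumes you can SSH into the VM and Docker is installed there):
# Point the local Docker CLI (and therefore VS Code) at the VM's Docker daemon over SSH
export DOCKER_HOST=ssh://user@centos-vm
# Should now list the containers running on the VM
docker ps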
Though not officially supported, it seems that it is possible to install Docker CLI without the daemon...
Is it possible to install only the docker cli and not the daemon
Maybe you can do this without admin?
That would, though, certainly be swimming against the grain. Probably your best bet is to stick with the 'remote - SSH' setup you've got going.
I just achieved this using the solution linked by @Tom (but with admin rights; I didn't test it without them).
I downloaded the Docker CLI from the docker-cli-builder GitHub repo and created the Docker context successfully.
After selecting the context in VS Code, it started using it, allowing me to see the containers on the remote machine.
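For reference, creating and selecting such a context looks roughly like this (the context name, user, and host are placeholders):
# Create a context that talks to the VM's Docker daemon over SSH
docker context create remote-vm --docker "host=ssh://user@centos-vm"
# Make it the active context; docker ps now shows the VM's containers
docker context use remote-vm
docker ps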
We have built a small tool called LiveSync which could solve your problem. You simply run
python3 -m pip install livesync
livesync <virtual-machine>
from inside your VS Code workspace. It will start watching for changes and push them immediately to the remote. Hence you can code locally (even run your tests) and have all changes synced with your target system.