I'm going to move several terabytes of data from a DRA bucket to a Nearline bucket, and I want to use the new Rewrite API, which I understand requires gsutil 4.12:
gsutil -m cp -r gs://my-dra-bucket/* gs://my-nearline-bucket/
But even after having run gcloud components update, I'm still on gsutil 4.11. Is there any other way to update to gsutil 4.12?
I'm on CentOS 7.
Note that the Cloud SDK incorporates updates to the underlying tools approximately every 2 weeks, so if you are attempting to update to a recently created release or pre-release of gsutil, it may not yet be available via the Cloud SDK.
You can get a copy of the newest gsutil from PyPI.
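For example (assuming pip is available on your CentOS 7 machine; note that this installs a standalone gsutil, separate from the copy bundled with the Cloud SDK):
pip install -U gsutil
gsutil version   # confirms the version of whichever copy is first on your PATH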
I have a CI/CD system uploading numerous large .deb packages into a Google Cloud Artifact Registry repository for Apt packages. The normal upload time is roughly 10 seconds for the average package. Yesterday, all uploads to this artifact registry started to hang until they were either terminated by an external trigger or timed out (over 30 minutes).
Any attempt to delete packages from the registry times out without deleting the package.
The command I have been using to upload is:
gcloud artifacts apt upload ${ARTIFACT_REPOSITORY} --location=${ARTIFACT_LOCATION} --project ${ARTIFACT_PROJECT} --source=${debPackageName} --verbosity=debug
I started by updating all gcloud components to the latest versions:
Google Cloud SDK 409.0.0
alpha 2022.11.04
beta 2022.11.04
bq 2.0.81
bundled-python3-unix 3.9.12
core 2022.11.04
gcloud-crc32c 1.0.0
gsutil 5.16
I tried deleting packages, thinking perhaps the artifact registry was getting bloated, using this command:
gcloud artifacts packages delete --location={LOCATION} --project {PROJECT} --repository={REPOSITORY} {PACKAGE} --verbosity=debug
But I consistently get:
"message": "Deadline expired before operation could complete."
The debug output from the original command and the delete command both spam this kind of message:
DEBUG: https://artifactregistry.googleapis.com:443 "GET /v1/projects/{PROJECT}/locations/{LOCATION}/operations/f9885192-e1aa-4273-9b61-7b0cacdd5023?alt=json HTTP/1.1" 200 None
When I created a new repository I was able to upload to it without the timeout issues.
I'm the lead for Artifact Registry. Firstly, apologies that you're seeing this kind of latency with update operations on Apt repositories. The delays are likely caused by regenerating the index for the repo; the bigger the repo gets, the longer this takes.
If you do a bunch of individual uploads/deletes, the index generation queues up and you get timeouts. We did change some of the locking behavior around this recently, so we may have inadvertently swapped one performance issue for another.
We are planning to stop doing the index generation in the same transaction as the file modification. Instead we'll generate it asynchronously, and will look at batching or de-duping so that less work is done for a large number of individual updates. It will mean that the index isn't up-to-date the moment the upload call finishes, but will be eventually consistent.
We're working on this now as a priority but you may not see changes in performance for a few weeks. The only real workaround is to do less frequent updates or to keep the repositories smaller.
Apologies again, we definitely want to get this working in a performant way.
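In the meantime, a rough sketch of the "less frequent updates" workaround (the loop, paths, and sleep interval below are illustrative assumptions, not an official recommendation) is to serialize uploads and pause between them so index regenerations don't pile up:
# Upload .deb files one at a time, giving each index regeneration time to finish
for deb in ./debs/*.deb; do
  gcloud artifacts apt upload "${ARTIFACT_REPOSITORY}" \
    --location="${ARTIFACT_LOCATION}" \
    --project="${ARTIFACT_PROJECT}" \
    --source="${deb}"
  sleep 60   # rough pause between uploads; tune to your repo size
done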
Is it possible to access or copy (transfer) a Google Cloud Source git repository to Google Cloud Storage?
The idea is to use the git repo as a website like GitHub Pages.
You can do this as follows:
clone the Google Cloud Source repo
use gsutil cp -r dir1/dir2 gs://my_bucket/subdir to copy the contents of the data to Google Cloud Storage, possibly after processing (e.g., if you want to use something like Jekyll or Middleman to generate your website). Note that this will also copy your .git directory, which you might want to exclude.
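If you want to skip the .git directory, one option (a sketch; the repository name, bucket, and path are placeholders) is to use gsutil rsync with an exclusion pattern instead of cp:
gcloud source repos clone my-repo
cd my-repo
gsutil -m rsync -r -x "^\.git/" . gs://my_bucket/subdir
The -x pattern is a regular expression matched against paths relative to the source directory, so this skips everything under .git/.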
I have an MVC4 + EF4.0 .NET 4.5 project (say, MyProject). I'm able to run the project locally just fine. When I FTP-deploy it to Azure Websites (not a cloud service), it runs fine too. However, if I do a Git deploy, the site 'runs' for the most part until it hits some EF5.0 database operations, at which point I get an exception: Unable to load the specified metadata resource.
Upon debugging I noticed that if I:
GIT deploy the entire MVC4 project (as before)
FTP in and then replace bin\MyProject.dll with the bin\MyProject.dll file that I just built locally (Windows 8 x64, VS2012, Oct'12 Azure tools) after the GIT push (i.e. same source)
then the Azure hosted website runs just fine (even the EF5.0 database functionality portion).
The locally built .dll is about 5 KB larger than the one built by the Azure Git publish, and both are 'Release' mode. It's obvious that the project built after the Git push (inside Azure) is being built differently than on my own PC. I checked the portal and it's set to .NET 4.5. I'm also pushing the entire solution folder (with just one project), not just small bits and pieces.
When I inspect the locally built and the remotely built MyProject.dll files, I notice the following difference (FrameworkDisplayName):
local: System.Runtime.Versioning.TargetFrameworkAttribute(".NETFramework,Version=v4.5", FrameworkDisplayName = ".NET Framework 4.5"),
remote: System.Runtime.Versioning.TargetFrameworkAttribute(".NETFramework,Version=v4.5", FrameworkDisplayName = ""),
Does anyone know why this is happening and what the fix might be?
Yes, this is a bug that will be fixed in the next release. The good news is that it's possible to work around it today:
First, you need to use a custom deployment script, per this post.
Then you need to change the MSBuild command line in the custom script per this issue.
Credit goes to David above for the pointers and hints. I voted him up, but I'll also post the exact solution to the issue here. I've edited my original post because I found there was a major bug that I didn't notice until I started from scratch (moved Git servers). So here is the entire process that worked for me.
Download Node.js (it's needed even for .NET projects because the Git deploy tools use it)
Install the azure-cli tool (open a regular command prompt and run npm install azure-cli -g)
In the command prompt, cd to the root of your repository (cd \projects\MyRepoRoot)
In there, type azure site deploymentscript --aspWAP PathToMyProject\MyProject.csproj -s PathToMySolution.sln (obviously adjust the paths as needed)
This will create the .deployment and deploy.cmd files (a sample .deployment is shown after these steps)
Now edit the deploy.cmd file and find the line starting with %MSBUILD_PATH% (there will be just one)
Insert the /t:Build parameter. For example:
[Before] %MSBUILD_PATH% <blah blah> /verbosity:m /t:pipelinePreDeployCopyAllFilesToOneFolder
[After] %MSBUILD_PATH% <blah blah> /verbosity:m /t:Build /t:pipelinePreDeployCopyAllFilesToOneFolder
Push to Git (check the Git output to confirm everything went OK)
Browse to your website and confirm it works!
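For reference, the generated .deployment file mentioned above is typically just a tiny config pointing the deployment engine at the custom script (exact contents may vary with the tool version):
[config]
command = deploy.cmd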
I'll be glad when it's fixed in the next revision so we won't have to maintain the build script.
We have a gem that contains shared code for multiple apps. It is hosted on a private github repo.
I want each app to automatically grab the latest version of that gem every time bundle install is run, so it is easy for the other members of my team to always be up to date, as well as having a simple deployment on Heroku and our Jenkins CI server.
It is my understanding that when bundle install is run, if some version of the gem has already been successfully installed, it will be used instead of any newer version.
Is there any way to force bundler to always get the latest version of the gem?
Do we just need to make bundle update a regular part of our workflow when we deploy or push to master (triggering a Jenkins run)?
As you said, the update command is a better fit for what you are trying to achieve, since you can force the private gem to update without affecting unrelated gems.
bundle update mygem
Per the bundle-update man page:
Update the gems specified ... ignoring
the previously installed gems specified in the Gemfile.lock.
In your dev environment you could create a bash or other script for running this in tandem with a standard bundle install.
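For example, a minimal wrapper along those lines (the gem name mygem is a placeholder) might be:
#!/bin/sh
set -e
bundle update mygem   # always pull the newest commit of the shared private gem
bundle install        # install everything else as pinned in Gemfile.lock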
As far as Heroku deploys, once you have updated and committed your Gemfile.lock changes to your git repo, Heroku should use that version, per their docs:
Gemfile.lock ensures that your deployed versions of gems on Heroku match the version installed locally on your development machine.
I'm working in a local branch and want to try my changes on a staging server, but I don't want to commit these changes. Can I deploy my local changes?
I know about the deploy:upload recipe. I need a way to deploy several files or the whole working directory.
Thanks.
The most important thing about Capistrano is that it lets you execute code on a remote server; what we call a deploy is just a set of default scripts that perform a lot of small tasks required to set up a new version of the application on the server.
So it is possible to write your own script that executes something like the following (untested, so it probably won't work as-is):
pack sources
system "tar -czf /tmp/package.tgz *"
upload it to server
upload "/tmp/package.tgz" "/tmp/package.tgz"
remove old files, unpack sources on server
run "cd /app_path/; rm -rf *; tar -xzf /tmp/package.tgz"
override files with some shared server configs, like database.yml (force recursive symlinks)
run "cp -flrs /app_shared_path/* /app_path/"
restart the application - this is for Passenger; use your own server's restart command
run "cd /app_path/; touch tmp/restart.txt"
I did a similar setup once for deployment, before I got access to git.
I deploy some cached (minified, etc.) javascript files from a Rails app. The simplest way is just to do this in a Capistrano task:
top.upload("public/javascripts/cache", "#{current_path}/public/javascripts/cache")
This will use scp to upload the entire 'cache' directory.