I am working my way through an online course from IBM on getting to know Bluemix. The exercise shows how to push an application to Bluemix.
I am running:
cf version 6.18.1+a1103f0-2016-05-24, on a Mac running OS X El Capitan 10.11.5 (15F34)
This is the command I entered:
cf push leonardbMyFirstDeploy3 -c "node app.js" -m 128M --no-manifest --no-start
This is the error I am getting (I have substituted my user name and the digits in the Mobile Documents folder name):
FAILED
Error processing app files in '/Users/myname': read /Users/myname/Library/Mobile Documents.###########/com~apple~TextInput/Dictionaries/.baseline/UserDictionary/SAlQVUhF7208e6_gvZx_zdKx1U1AzKGem3HO2pLKjgY=/baseline.zip: bad file descriptor
I checked the file and, yes, it seems to be corrupted. As I understand it, this directory is the local location for iCloud sharing on my disk. I don't know how this dictionary file got there, and I probably don't need it.
But my questions are these:
For the Cloud Foundry push command, is there a way to generate a trace to get more information?
Why would the push even be looking at or using the file that is giving it problems? It seems like a significant overreach for it to be anywhere near this folder. Does anyone know why?
Can anyone advise how to fix this?
I did try adding the Mobile Documents.########## directory to a .cfignore file, but this did not seem to change the outcome; the error recurred.
If the push command reports OK on certain steps, are there any cleanup or rollback commands that need to be run before pushing again?
When you run cf push without the -p option, it recursively uploads everything in the current directory and below it. So if you were in /Users/myname when you ran the push, it will have tried to upload everything underneath your home directory. Try creating a separate directory that contains just your app files and push from there instead.
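A minimal sketch of that workflow (the staging path and file list are assumptions; copy whatever your app actually needs):

# Stage only the application files, then push from the staging directory.
mkdir -p ~/bluemix-apps/myfirstdeploy
cp ~/app.js ~/package.json ~/bluemix-apps/myfirstdeploy/
cd ~/bluemix-apps/myfirstdeploy
cf push leonardbMyFirstDeploy3 -c "node app.js" -m 128M --no-manifest --no-start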
If you run the cf command without any arguments, it will dump a bunch of usage info, including the following environment variable for controlling debug tracing:
CF_TRACE=true Print API request diagnostics to stdout
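For example, to capture the full API diagnostics from the failing push in a file:

CF_TRACE=true cf push leonardbMyFirstDeploy3 -c "node app.js" -m 128M --no-manifest --no-start > push-trace.log 2>&1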
Finally, you do not have to clean anything up explicitly. If you do as I suggested above and move just your application files into their own directory and run the push from there, it will simply overwrite what you pushed before.
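On the .cfignore question: as far as I know, the file only takes effect when it sits at the root of the directory being pushed, with patterns relative to that root, which may be why it appeared to do nothing when pushing from the home directory. A hedged sketch:

# .cfignore lives in the directory you push from; patterns work like .gitignore.
cd ~/bluemix-apps/myfirstdeploy
printf 'node_modules/\n*.log\n' > .cfignore
cf push leonardbMyFirstDeploy3 -c "node app.js" -m 128M --no-manifest --no-start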
I have an app that creates a temporary file and does not delete it. I was hoping to see the contents of the file while it runs.
The app is deployed using the local deployer. Does anybody know where it would create the file?
I tried the temp path, and also the working directory where the out and error logs are... nothing. The app does not seem to be erroring; that would show up in my normal console log.
Running on unix, temp is at /tmp.
thanks
You can control this location via the local deployer properties workingDirectoriesRoot and deleteFilesOnExit.
For more information, you can refer to this doc:
https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#configuration-deployer
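A hedged sketch of setting those properties when starting the local Data Flow server (the server jar name and the path are assumptions; the property keys follow the doc above):

java -jar spring-cloud-dataflow-server-local.jar \
    --spring.cloud.deployer.local.workingDirectoriesRoot=/var/dataflow/work \
    --spring.cloud.deployer.local.deleteFilesOnExit=false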
Actually, looking at the code of the local deployer, it seems the location defaults to the system temp path (System.getProperty("java.io.tmpdir")) plus the stream id, the app id, etc. It is the same folder the console and error streams are written to.
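So, under the defaults, something like this should turn the files up (the directory naming varies by deployer version, so treat the listing as a starting point, not a guarantee):

# Print the JVM's temp dir, then look for recently created working directories.
java -XshowSettings:properties -version 2>&1 | grep java.io.tmpdir
ls -ldt /tmp/* | head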
thanks!
I'm trying to use the Infectious Media Generator to practice some pen-testing with a USB. As I go through the process, after I enter the port number I get this:
set:payloads> Port to connect back on [443]:443
[-] Generating fileformat exploit...
[*] Payload creation complete.
[*] All payloads get sent to the /root/.set/template.pdf directory
[!] Something went wrong, printing the error: name 'src' is not defined
I saw something that said to update, but when I run ./seupdate it erases everything in SET and says it needs a directory specifying where it should pull information from. I initially tried pointing it at GitHub, but that didn't work.
There is also the issue that the user manual specifies using the ./set-update command; however, I can't find that executable anywhere in my directory.
I also tried running the command on SET's website to install SET, but that didn't work either, which is why I downloaded the .zip and extracted it. Has anyone run into these errors?
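For anyone hitting the same wall, a hedged sketch of reinstalling SET from the TrustedSec repository instead of using ./seupdate (the repo URL is the current upstream location; verify it matches the copy you were using, and note that requirements.txt/setup.py may differ between releases):

git clone https://github.com/trustedsec/social-engineer-toolkit set
cd set
pip install -r requirements.txt
python setup.py install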
I am new to Google Cloud. We have historically used AWS for online backups -- essentially, our local servers ran rsync to an EC2 instance at AWS and it all worked fine. I'm now trying to migrate from AWS to Google, and of course the setup is pretty different. With gsutil rsync it looked to me as though I wouldn't need to spin up a Compute Engine instance at all; I could just push stuff straight into the gs://aws_mnt bucket.
Having installed the SDK on our AWS instance, I was able to push all our backups to the gs://aws_mnt bucket very easily using gsutil cp -n.
But going forward I want to run a cron job on the local server which uses rsync rather than cp for obvious reasons.
I have two issues:
Despite reading the appropriate documentation, I can't figure out how to permanently authorise the local server so that I don't have to do gcloud auth login and get a code from a browser each session; for a cron job that's not really going to work.
When I try to use gsutil rsync from the local server to the gs://aws_mnt bucket that was pre-populated from AWS, I get an error:
gsutil rsync /mnt/archive/backups gs://aws_mnt/kahless
Building synchronization state...
Skipping cloud sub-directory placeholder object gs://aws_mnt/kahless/
Starting synchronization
There is some discussion of this error on GitHub, and I've produced detailed output from:
gsutil -D -m rsync /mnt/archive/backups gs://aws_mnt/kahless
But since this is a brand-new install of the SDK, I can't imagine that thread hasn't already been dealt with, so I must be doing something wrong?
Rus
In response to your questions:
Once you have configured credentials using gcloud auth, the gcloud auth login command will cause them to be selected until you log in with a different credential. That state persists and will not require you to go through the browser session again unless/until you revoke those credentials. Note: if you're thinking of running commands from an unattended script (e.g., via cron), please consider using service account credentials. For more details please see https://developers.google.com/cloud/sdk/gcloud/#gcloud.auth
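A hedged example of the service-account route for unattended jobs (the account name and key-file path below are placeholders for illustration):

gcloud auth activate-service-account backup-runner@my-project.iam.gserviceaccount.com --key-file=/etc/gcloud/backup-key.json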
That "skipping..." message is not an error - it's just informing you that gsutil is skipping trying to download the placeholder object, because such objects aren't needed in (and would interfere with) directories in the local file system. I'll update the message in the next version of gsutil to make this more clear. So, what you saw was that the second run of gsutil rsync found nothing to do after comparing the source and destination, and completed normally.
We use an ETL process to pull data from Google Cloud Storage, but annoyingly it hangs every time Google releases updates to gsutil, because it sits at a prompt asking if you want to update the library. Fine if you are doing this manually, but not cool when it's being run in an automated SSIS package, as jobs don't finish for days and you keep wasting time on the same stupid cause.
I thought I was going to be clever and add "python gsutil update -n" to the top of the bash script whose build and execution I'm automating in my SSIS package, in the hope of curbing this problem, but when I run this command from the prompt on either Windows Server 2008 R2 or Windows 7 I get the following:
C:\gsutil>python gsutil update -f -n
Copying gs://pub/gsutil.tar.gz...
OSError: The process cannot access the file because it is being used by another process.
Any help?
P.S. - Also, Google engineers: can you PLEASE remove these prompts for those of us using these tools in automated processes? I have other things to work on instead of constantly coming back to things like this every few days or weeks.
What version of gsutil are you running?
Also, to be clear: Are you talking about the fact that gsutil checks for available software updates periodically, and if it finds them it then prompts you whether you want to update? Or are you talking about the fact that the gsutil update command asks if you want to perform the update?
If the former, gsutil shouldn't be performing this check/prompting if you are running gsutil from a script not connected to a TTY. If that's not working correctly, we'd like to know.
And also, if that's the problem you're having, you can completely disable automated software update checks by setting software_update_check_period=0 in the [GSUtil] section of your .boto config file.
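A minimal sketch of that change, assuming a unix-style shell and the default ~/.boto location (on Windows the file is typically %USERPROFILE%\.boto, so add the lines there by hand; if a [GSUtil] section already exists, put the key under it instead of appending a duplicate section):

# Disable gsutil's periodic software update check for unattended runs.
printf '[GSUtil]\nsoftware_update_check_period = 0\n' >> ~/.boto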