cf push - how can I update/push selected files (modified files) using CloudFoundry - ibm-cloud

Pushing the entire codebase every time is time-consuming and not good practice.
How can I perform incremental push? Is there a way?

Expanding on @jimmc's comment:
Cloud Foundry clients (CLI, Java Client, etc) automatically do incremental push of application bits. Here's how it works:
When a CF client is given a directory to push, it gets a list of files in the directory and all subdirectories. When a client is given an archive (.jar, .war, .zip) to push, it explodes the archive locally on the client machine. Only the first level of the archive is exploded; any embedded archives (e.g. .jar files inside a .war file) are not. It then gets a list of files in the exploded archive.
The client then calculates a SHA for each file and sends the list of files with SHAs to the CF resource matching API. CF will respond with a list of files that it already has (e.g. from a previous push). The client then sends only the files that CF doesn't already have.
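As a rough illustration only (not the CLI's actual code), the fingerprinting step amounts to something like the shell sketch below; the ./my-app path is a made-up example:
# Compute a SHA-1 and byte size for every file in the app directory.
# The real client sends this list to the resource-matching API and then
# uploads only the files the platform reports as missing.
find ./my-app -type f | while read -r f; do
  sha1=$(shasum -a 1 "$f" | cut -d' ' -f1)
  size=$(wc -c < "$f" | tr -d ' ')
  printf '{"sha1":"%s","size":%s}\n' "$sha1" "$size"
done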

push should be capable of sync:
$ cf p -h
NAME:
push - Push a new app or sync changes to an existing app
However, by default cf push recursively pushes the contents of the current working directory.
Note: If you want to push more than a single file, but not the entire contents of a directory, consider using a .cfignore file to tell cf push what to exclude.
Example .cfignore file contents:
tmp/
log/
my_unnecessary_file.txt
When executing your next cf push for deploying the application it will omit the files and directories listed in your .cfignore file.

Related

How to clean up and create an archiving strategy for a folder on GitHub

I have
A git repository
A folder on this repository
To this folder I upload .SQL files. Each new DDL is a new .SQL file uploaded to the same folder, since this is the place from which a CI/CD process kicks off to act upon the new file. I change the SQL code now and then, but have no use for the files after a certain point, as they get executed against the target database via Liquibase.
The Problem
Over time this folder has grown to close to 5,000 .SQL files, and it keeps growing every day
It's getting cumbersome to navigate and find anything in this folder
The CI/CD build out of this folder is taking a lot of time because it zips the entire folder
I want to
Archive/Remove everything more than 3 months old from the main folder
Move the old files to an Archived location so that I can refer to them
Get the file count down to a manageable level
Possibly do the archiving in an automated way without manual intervention
I Do not want to
Delete the files as I have to maintain a history
Change the process, e.g. by having only one SQL file and repeatedly modifying it.
I do not want to delete the files as I have to maintain a history
And yet, this is the simplest solution, as you can still list the history of a deleted file.
# filter the deleted file to find one:
git log --diff-filter=D --summary | grep pattern_to_search
# Find the log of a deleted file:
git log --all -- FILEPATH
That means your process would simply:
list all files older than a certain date
add their name to a "catalog" file (in order to query them easily later on)
delete (git rm) them (their history is still there)
For any file present in the "catalog" file, you still can check their log with:
git log -- a/deleted/file
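A minimal sketch of those three steps, assuming bash with GNU date, a hypothetical ddl/ folder, a catalog.txt at the repository root, and a 90-day cutoff (none of these names come from the answer above):
# Sketch only: remove SQL files whose last commit is older than ~90 days,
# recording their names in catalog.txt so they can be looked up later.
cutoff=$(date -d '90 days ago' +%s)          # GNU date
for f in ddl/*.sql; do
  last=$(git log -1 --format=%ct -- "$f")    # timestamp of the file's last commit
  if [ -n "$last" ] && [ "$last" -lt "$cutoff" ]; then
    echo "$f" >> catalog.txt
    git rm --quiet "$f"                      # the file's history stays in git
  fi
done
git add catalog.txt
git commit -m "Archive SQL files older than 90 days (see catalog.txt)"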

How do I add a file in a subfolder to a new repository?

I have a repository for a website and it has two separate remotes. One is for the website files and one for datasets and R scripts to make some data in my blog posts reproducible and archived for the future.
My local file structure looks like this.
Website
|-- website-files/posts/blog-post1
|-- website-files/posts/blog-post2
|-- website-files/posts/r_script.R
The folder Website has two remotes: one (origin) for the website, and one (blog-post) as the dumping ground for my replication files.
So, because I have cleanly added a second remote, I tried to add the file r_script.R and push it to the remote blog-post.
git add website-files/posts/r_script.R
Then, though, when I check the status, git status shows the file as untracked, listed as
../../r_script.R
The precise question: How do I add a file in a subfolder so that it is tracked, and then push it to its own remote? Note: when I copy r_script.R to the folder Website and run git add r_script.R, it shows up as a staged file ready for committing.
But I would really rather keep it in the subfolder to keep it clean.
Maybe I should add the repo blog-post as a submodule in the subfolder website-files/posts/, or something like that?
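For what it's worth, the submodule idea floated above would look roughly like this; the remote URL and the replication folder name are placeholders, not a confirmed answer:
# Hypothetical sketch of the submodule approach.
cd Website
git submodule add <blog-post-remote-url> website-files/posts/replication
cd website-files/posts/replication
mv ../r_script.R .                # the script now lives inside the submodule
git add r_script.R
git commit -m "Add replication script"
git push origin master            # pushes to the blog-post remote, not the website's origin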

Import existing file folder to IBM Watson Application

I have downloaded this IBM Watson project to my PC:
https://github.com/watson-developer-cloud/conversation-simple/
and, following its tutorial, I've uploaded it to my IBM dashboard.
The problem is that every time I want to change something in the project I have to re-upload it with the command line command cf push.
When I go to the Toolchain section, I can't see the files and their folders; I can only create a new repository or clone one, and in both cases my problem isn't resolved.
How can I resolve this problem?
Try to use:
cf push APP_NAME
Note: cf push does not support incremental upload; it will simply push everything in the folder to the cloud.
If your node_modules folder is included, it will result in a large upload. Try to specify what to exclude in a .cfignore file.
Example .cfignore file contents:
tmp/
node_modules/
my_unnecessary_file.txt
When executing your next cf push for deploying the application it will omit the files and directories listed in your .cfignore file.
See more about cf push.
See more about deploying applications with the CF CLI.
It doesn't change anything, because when you create a toolchain it asks you to select between:
- New
- Clone
- Fork
and I tried all these options, but when I go into the web Eclipse editor the files don't appear.
This is the git repository of my app:
https://git.ng.bluemix.net/consultagiovanilepolizzi/official-app2
How can I import with Git an existing project in my computer to edit it with toolchain?
As it seems that you are working with Continuous Delivery, instead of uploading your app using cf push you are going to use the toolchain.
The first step is to use the git repository. You need to create an SSH key or an access token; if you prefer to connect through HTTPS, the access token can be created in the Bluemix Git settings of your user, under Access Tokens.
After that, you can push your app to the repository using a git tool; the username will be your IBM account and the password will be your token.
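A rough example of that push over HTTPS, using the repository URL from the question; the remote name bluemix, the branch, and the git init step are assumptions about your local setup:
# Add the Bluemix Git repository as a remote and push the local project.
git init                      # only if the local project folder is not a git repo yet
git add .
git commit -m "Import local project"
git remote add bluemix https://git.ng.bluemix.net/consultagiovanilepolizzi/official-app2.git
git push bluemix master
# When prompted, enter your IBM account as the username and the access token as the password.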
In the delivery pipeline, the build stage checks your repository for new commits on master by default and starts a new build, handing it off to the deploy stage if successful.
The links below can give you more details:
Using git Repos
Setting up local clients
Delivery Pipeline

Why does .meteor have a .gitignore file?

I am creating a new meteor app and would like to put the whole thing under git source control. When cloning a working copy of my meteor directory, meteor gives: run: You're not in a Meteor project directory.
After inspecting the .meteor directory, I see that the files in here are being excluded in my local clone.
Is there a particular reason this is done?
As @Swadq already pointed out, the .meteor directory is Meteor's own directory. It contains a folder and a file.
The local directory contains the compiled version of your application and some database information (a lock file and the actual raw MongoDB data). This of course should not be included in your VCS.
The packages file contains all packages Meteor should load for your application. This is of course important and must be included in your VCS. More importantly, this file is checked to determine whether the current directory is a Meteor application. If you don't include it, you'll lose the packages you rely on and the ability to simply run the app using meteor.
So ideally your .gitignore file should only contain .meteor/local but not .meteor/packages. When using meteorite, the .gitignore file should contain .meteor/meteorite as well.
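In other words, a .gitignore along these lines (the last line only applies if you actually use meteorite):
# Keep .meteor/packages tracked; ignore the local build output and database.
.meteor/local
.meteor/meteorite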

Jenkins: FTP / SSH deployment, including deletion and moving of files

I was wondering how to get my web-projects deployed using ftp and/or ssh.
We currently have a self-made deployment system which is able to handle this, but I want to switch to Jenkins.
I know there are publishing plugins and they work well when it comes to uploading build artifacts. But they can't delete or move files.
Do you have any hints, tips or ideas regarding my problem?
The Publish Over SSH plugin enables you to send commands over ssh to the remote server. This works very well; we also do some moving/deleting of files before deploying the new version and have had no problems whatsoever using this approach.
The easiest way to handle deleting and moving items is by deleting everything on the server before you deploy a new release using one of the 'Publish over' extensions. I'd say that really is the only way to know the deployed version is the one you want. If you want more versioning-system style behavior, you either need to use a versioning system or maybe rsync, which will cover part of it.
If your demands are very specific you could develop your own convention to mark deletions and have them be performed by a separate script (like you would for database changes using Liquibase or something like that).
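As a sketch of such a convention (the manifest name deletions.txt and the deploy root are made-up examples, not an existing plugin feature): keep a list of paths to delete in the repository and run a small script on the target host, for instance as an exec command of the Publish Over SSH plugin.
# Hypothetical example: deletions.txt lists paths (relative to the deploy root) to remove, one per line.
DEPLOY_ROOT=/var/www/myapp        # assumed deploy root on the target host
while read -r path; do
  [ -n "$path" ] && rm -rf "$DEPLOY_ROOT/$path"
done < deletions.txt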
By the way: I would recommend not automatically updating your live sites after every build using the 'publish over ...' extension. In cases where we really want a live site updated automatically, we rely on the Promoted Builds Plugin to keep it nearly fully automated while adding a little safety.
I came up with a simple solution to remove deleted files and upload changes to a remote FTP server as a build action in Jenkins, using a simple lftp mirror script (see the Lftp Manual Page).
In short, you create a config file ~/.netrc in your Jenkins user's home directory and populate it with your FTP credentials.
machine ftp.remote-host.com
login mySuperSweetUsername
password mySuperSweetPassword
Create an lftp script named deploy.lftp and drop it in the root of your git repo
set ftp:list-options -a
set cmd:fail-exit true
open ftp.remote-host.com
mirror --reverse --verbose --delete --exclude .git/ --exclude deploy.lftp --ignore-time --recursion=always
Then add an "Exec Shell" build action to execute lftp on the script.
lftp -f deploy.lftp
The lftp script will
mirror: copy all changed files
reverse: push local files to the remote host; a regular mirror pulls from the remote host to local.
verbose: write notes about which files were copied where to the build log
delete: remove remote files no longer present in the git repo
exclude: don't publish .git directory or the deploy.lftp script.
ignore-time: don't decide what to publish based on file timestamps. Without this, in my case, all files got published because a fresh clone of the git repo updated the file creation timestamps. It still works quite well though: even files modified by adding a single space were identified as different and uploaded.
recursion: analyze every file rather than relying on folders to determine whether any files in them might have been modified. This isn't strictly necessary since we're ignoring timestamps, but I have it in here anyway.
I wrote an article explaining how I keep FTP in sync with Git for a WordPress site I could only access via FTP. The article explains how to sync from FTP to Git, then how to use Jenkins to build and deploy back to FTP. This approach isn't perfect, but it works: it only uploads changed files, and it deletes files from the host that have been removed from the git repo (and vice versa).