Deploy a zip file to AWS Lambda automatically

I have zipped my source code using Python and moved the zip file to an S3 bucket. How can I automatically deploy this zip file to my already existing Lambda function?
Could you please give me an idea on this?
Thanks in advance.

First, install the Serverless Framework:
npm install -g serverless
Check this repo for examples: serverless examples. It includes a simple Python Lambda function example.
You can reference your Lambda function code, create the necessary roles and invoke permissions, and declare your resources in serverless.yml.
To deploy the CloudFormation stack, run the following command from the directory containing serverless.yml:
serverless deploy
To delete the resources you deployed, run the following command from the same directory:
serverless remove
This saves you a lot of time compared to creating your resources through the console.
The repo also contains examples in Node.js and other runtimes.
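For reference, a minimal Python handler of the sort those examples wire up might look like this (the file name handler.py and the response shape are placeholders, not taken from the question):

import json

def hello(event, context):
    # Minimal Lambda handler: echo the incoming event back in a JSON response.
    body = {
        'message': 'Hello from Lambda deployed with the Serverless Framework',
        'input': event,
    }
    return {
        'statusCode': 200,
        'body': json.dumps(body),
    }

In serverless.yml you would point the function's handler setting at this file (e.g. handler.hello).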

You can set up S3 to trigger a separate Lambda function whenever a zip is uploaded to the bucket, and configure that function to push the zip from S3 to your target Lambda function.
If your use case is simply making changes and updating the code from the bucket, you can use Serverless instead of paying for another Lambda function.
Serverless uses CloudFormation under the hood.
See this reference on how to set up an S3 trigger: create s3 trigger. In the triggered Lambda, write your logic with the boto3 client to push the code to the other Lambda function.
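For the triggered Lambda, a minimal boto3 sketch could look like the following; it assumes the S3 event carries the bucket and key of the uploaded zip, and that the target function's name is supplied through an environment variable (the variable name TARGET_FUNCTION_NAME is an assumption, not something from the question):

import os
import boto3

lambda_client = boto3.client('lambda')

def handler(event, context):
    # Read the bucket and key of the uploaded zip from the S3 event record.
    record = event['Records'][0]
    bucket = record['s3']['bucket']['name']
    key = record['s3']['object']['key']

    # Point the target Lambda function's code at the newly uploaded zip.
    # TARGET_FUNCTION_NAME is an assumed environment variable holding the
    # name of the function you want to update.
    response = lambda_client.update_function_code(
        FunctionName=os.environ['TARGET_FUNCTION_NAME'],
        S3Bucket=bucket,
        S3Key=key,
    )
    print('Updated code for', response['FunctionName'])

The triggered Lambda's execution role needs lambda:UpdateFunctionCode permission on the target function for this call to succeed.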

Related

Where is a file created via Terraform code stored in Terraform Cloud?

I've been using Terraform for some time, but I'm new to Terraform Cloud. I have a piece of code that, when run locally, creates a .tf file under a folder that I specify, but when I run it with the Terraform CLI on Terraform Cloud this doesn't happen. I'll show it here so it's clearer for everyone.
resource "genesyscloud_tf_export" "export" {
directory = "../Folder/"
resource_types = []
include_state_file = false
export_as_hcl = true
log_permission_errors = true
}
So basically, when I launch this code with terraform apply locally, it creates a .tf file with everything I need. Where? It goes up one folder and stores the file under the folder "Folder".
But when I execute the same code on Terraform Cloud, this obviously doesn't happen. Do any of you have a workaround for this kind of problem? How can I store this file, for example in a GitHub repo, when executing GitHub Actions? Thanks in advance.
The Terraform Cloud remote execution environment has an ephemeral filesystem that is discarded after a run is complete. Any files you instruct Terraform to create there during the run will therefore be lost after the run is complete.
If you want to make use of this information after the run is complete then you will need to arrange to either store it somewhere else (using additional resources that will write the data to somewhere like Amazon S3) or export the relevant information as root module output values so you can access it via Terraform Cloud's API or UI.
I'm not familiar with genesyscloud_tf_export, but from its documentation it sounds like it will create either one or two files in the given directory:
genesyscloud.tf or genesyscloud.tf.json, depending on whether you set export_as_hcl. (You did, so I assume it'll generate genesyscloud.tf.)
terraform.tfstate if you set include_state_file. (You didn't, so I assume that file isn't important in your case.)
Based on that, I think you could use the hashicorp/local provider's local_file data source to read the generated file into memory once the MyPureCloud/genesyscloud provider has created it, like this:
resource "genesyscloud_tf_export" "export" {
directory = "../Folder"
resource_types = []
include_state_file = false
export_as_hcl = true
log_permission_errors = true
}
data "local_file" "export_config" {
filename = "${genesyscloud_tf_export.export.directory}/genesyscloud.tf"
}
You can then refer to data.local_file.export_config.content to obtain the content of the file elsewhere in your module and declare that it should be written into some other location that will persist after your run is complete.
This genesyscloud_tf_export resource type seems unusual in that it modifies data on local disk and so its result presumably can't survive from one run to the next in Terraform Cloud. There might therefore be some problems on the next run if Terraform thinks that genesyscloud_tf_export.export.directory still exists but the files on disk don't, but hopefully the developers of this provider have accounted for that somehow in the provider logic.

Producing a CSV of Cloud Bucket files

What's the best way to create a CSV file listing images in a Google Cloud bucket to be imported into AutoML Vision?
If you want to listen for files saved to a bucket, you can use a Google Cloud Function to listen for new files and create the CSV file in another bucket.
For example, you can use this Python code as a starting point; it logs the details of a newly uploaded file:
def hello_gcs_generic(data, context):
    """Background Cloud Function to be triggered by Cloud Storage.
    This generic function logs relevant data when a file is changed.
    Args:
        data (dict): The Cloud Functions event payload.
        context (google.cloud.functions.Context): Metadata of triggering event.
    Returns:
        None; the output is written to Stackdriver Logging
    """
    print('Event ID: {}'.format(context.event_id))
    print('Event type: {}'.format(context.event_type))
    print('Bucket: {}'.format(data['bucket']))
    print('File: {}'.format(data['name']))
    print('Metageneration: {}'.format(data['metageneration']))
    print('Created: {}'.format(data['timeCreated']))
    print('Updated: {}'.format(data['updated']))
Basically, the function listens for the storage event "google.storage.object.finalize" (this happens when a file is uploaded).
To deploy this function to the cloud, you can use this command:
gcloud functions deploy hello_gcs_generic --runtime python37 --trigger-resource [your bucket name] --trigger-event google.storage.object.finalize
Or you can use the GCP console (Web UI) to deploy this function:
select "Cloud Storage" in the trigger field
select "Finalize/Create" as the event type
specify your bucket
You can even process the files directly with AutoML within a Cloud Function, as mentioned in this example.
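If you also want to start the AutoML Vision import from a script or Cloud Function instead of the console, a rough sketch with the google-cloud-automl client could look like this (the project ID, region, dataset ID, and CSV path are all placeholders, and the exact call shape can vary between library versions, so treat it as a starting point only):

from google.cloud import automl

PROJECT_ID = 'my-project'                  # placeholder
DATASET_ID = 'ICN1234567890'               # placeholder AutoML Vision dataset ID
CSV_URI = 'gs://my-csv-bucket/images.csv'  # placeholder CSV location

client = automl.AutoMlClient()
dataset_name = client.dataset_path(PROJECT_ID, 'us-central1', DATASET_ID)

# Ask AutoML to import the images listed in the CSV; this returns a
# long-running operation.
operation = client.import_data(
    name=dataset_name,
    input_config={'gcs_source': {'input_uris': [CSV_URI]}},
)
operation.result()  # blocks until the import finishes
print('Import complete')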

How to download multiple objects from IBM Cloud Object Storage?

I am trying to use IBM Cloud Object Storage to store images uploaded to my site by users. I have this functionality working just fine.
However, based on the documentation here (link) it appears as though only one object can be downloaded from a bucket at a time.
Is there any way a list of objects could all be downloaded from the bucket? Is there a different approach to requesting multiple objects from a COS bucket?
Via the REST API, no, you can only download a single object at a time. But most tools (like the AWS CLI or the Minio Client) allow downloading all objects that share a prefix (e.g. foo/bar and foo/bas). The IBM forks of the S3 libraries are also now integrated with Aspera and can transfer large directories all at once. What are you trying to do?
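If you prefer doing this from a script rather than the CLI, a small boto3 sketch along the same prefix-based lines could look like this (it assumes HMAC credentials configured the usual boto3 way and a public COS endpoint; the endpoint, bucket, prefix, and local directory are placeholders):

import os
import boto3

COS_ENDPOINT = 'https://s3.us-east.cloud-object-storage.appdomain.cloud'  # placeholder endpoint
BUCKET = 'yourBucketName'   # placeholder
PREFIX = 'images/'          # placeholder prefix shared by the objects you want
LOCAL_DIR = 'downloads'     # placeholder local target directory

s3 = boto3.client('s3', endpoint_url=COS_ENDPOINT)
os.makedirs(LOCAL_DIR, exist_ok=True)

# Page through every object under the prefix and download each one.
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get('Contents', []):
        key = obj['Key']
        target = os.path.join(LOCAL_DIR, os.path.basename(key))
        s3.download_file(BUCKET, key, target)
        print('Downloaded', key)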
According to S3 spec (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html), you can only download one object at a time.
There are various tools that can help download multiple objects at a time from COS. I used the AWS CLI to download and upload objects from/to COS.
So install the AWS CLI and configure it by supplying your access_key_id and secret_access_key, as described here.
Recursively copying S3 objects to a local directory: When passed with the parameter --recursive, the following cp command recursively copies all objects under a specified prefix and bucket to a specified directory.
C:\Users\Shashank>aws s3 cp s3://yourBucketName . --recursive
For example:
C:\Users\Shashank>aws --endpoint-url http://s3.us-east.cloud-object-storage.appdomain.cloud s3 cp s3://yourBucketName D:\s3\ --recursive
In my case, the endpoint is in the us-east region and I am copying objects into the D:\s3 directory.
Recursively copying local files to S3: When passed with the parameter --recursive, the following cp command recursively copies all files under a specified directory to a specified bucket.
C:\Users\Shashank>aws s3 cp myDir s3://yourBucketName/ --recursive
For example:
C:\Users\Shashank>aws --endpoint-url http://s3.us-east.cloud-object-storage.appdomain.cloud s3 cp D:\s3 s3://yourBucketName/ --recursive
Here I am copying objects from the D:\s3 directory to COS.
For more reference, you can see the link here.
I hope it works for you.

Is it possible to use "Custom Sources and Sinks" to write/append file during Dataflow pipeline execution?

My program relies on local system storage to write a file that it generates itself, hence I'm executing the job in "DirectPipelineRunner" mode. Below is the flow:
One of my functions makes multiple REST API requests and creates/appends to a file (Output.txt) in local system storage.
Pipeline: a) Upload the generated file to GCS, b) Read the file from GCS, c) Perform the transformation, d) Write to BigQuery.
Since my program writes/appends the API responses to local system storage, I'm executing the pipeline in DirectPipelineRunner mode.
Is it possible to have temporary space in the cloud to remove the dependency on the local file system, so that I can execute the pipeline in DataflowPipelineRunner mode?
I guess Custom Sources and Sinks can be used here. Can someone shed some light on this problem?

I cannot just deploy a function with Serverless-framework 1.20.2

I wanted to follow these tips
and just redeploy my function, as the serverless.yml had not been changed.
However, it just hangs on the Serverless: Uploading function stage. Forever, apparently.
The whole deploy (with sls deploy) works, though slowly.
How can I debug this, as there is apparently no error message?
EDIT
When I use sls deploy, my project takes about 4 min and 15s to deploy.
It seems rather long to me, so I thought I would use sls deploy function -f myFunction instead, which is supposed to be much faster.
However, when I try sls deploy function -f myFunction, it seems to just hang forever on Serverless: Uploading function: myFunction.
I have no idea how to debug that.
Using --verbose with the function deploy does not seem to make a difference; the messages returned are the same.
I will try to wait and see if, eventually, the function deploy completes...
Well, I waited, and it doesn't: after about 8 min 30s I get the following error message:
Serverless Error ---------------------------------------
Connection timed out after 120000ms
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Forums: forum.serverless.com
Chat: gitter.im/serverless/serverless
Your Environment Information -----------------------------
OS: linux
Node Version: 7.10.0
Serverless Version: 1.20.2
Another oddity: when hanging, it reads:
Serverless: Uploading function: myFunction (12.05 MB)...
But the function itself is just 3.2 kB, and does not include any packages.
When I use sls deploy, the size displayed is the same:
Serverless: Uploading service .zip file to S3 (12.05 MB)...
What could be wrong with my function deploy?
EDIT 2
As @dashmug hinted, there is a config issue in serverless.yml.
In the functions dir of my serverless project, I would like to have a common package.json and node_modules. Then each function could import modules as needed.
I tried to follow the official guide.
My serverless.yml is like so:
functions:
  myFunction:
    package:
      exclude:
        - 'functions/node_modules/**'
        - '!functions/node_modules/module1_I_want_to_include/**'
        - '!functions/node_modules/module2_I_want_to_include/**'
Now I get, with sls deploy:
Serverless: Uploading service .zip file to S3 (31.02 MB)...
and the function works :)
However, with sls deploy function -f myFunction, I get:
Serverless: Uploading function: dispatch (1.65 MB)...
It does upload in a reasonable time, but the function now gives the following error:
Unable to import module 'functions/myFunction': Error
Things I would look at:
Try comparing what happens between the two:
$ SLS_DEBUG=true sls deploy --verbose
and
$ SLS_DEBUG=true sls deploy function -f myFunction --verbose
Check your serverless config (packaging, etc.) against your project structure. One red flag is that the function deploy is as big as the service deploy. This could be a misconfiguration problem.
Use serverless package to see how the package(s) are zipped. It can provide some clues.
Are you using any plugins which may have altered the way your package is created?
How many node_modules directories do you have? Do you have only one for the entire service, or one for each function?
You can make the deploy process more verbose by passing the --verbose argument to the deploy command.
Either sls deploy --verbose or sls deploy -v will do the trick.
I wasn't able to figure out why function deployment (as opposed to service deployment) would hang. I may have misconfigured my serverless.yml file.
But no big deal: I can do without sls deploy function -f myFunction.
Because my expectations were wrong. I thought deploying a function would be way faster than deploying a service, by somehow not redeploying the node_modules directory.
But there is no partial function deployment in AWS: when a function is deployed, all necessary node modules must be deployed as well for the function to work.
As explained in the Serverless docs:
The Framework packages up the targeted AWS Lambda Function into a zip file.
The Framework fetches the hash of the already uploaded function .zip file and compares it to the local .zip file hash.
The Framework terminates if both hashes are the same.
That zip file is uploaded to your S3 bucket using the same name as the previous function, which the CloudFormation stack is pointing to.
I had (naively) hoped that only the updated handler would be uploaded to S3.
But as the function is packaged before deployment, it does need all of its modules and dependencies.
So the way I see it, function deployment would save time (as opposed to service deployment) only if the service has multiple functions, and the service functions do not use many common nodejs modules. And if sls deploy function -f myFunction does not hang, that is :)
So to increase development speed, the trick is to use offline emulation with a tool like serverless-offline.
serverless-offline provides a local server, and the Lambda function myFunction becomes accessible locally by calling http://localhost:3000/myFunction in Postman or the browser.
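As a quick illustration (assuming serverless-offline's default port of 3000 and that myFunction is exposed as an HTTP GET endpoint, which depends on your own serverless.yml), the same local endpoint can be exercised from Python:

import requests

# Call the locally emulated function exposed by serverless-offline.
response = requests.get('http://localhost:3000/myFunction')
print(response.status_code)
print(response.text)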
In most cases, sls deploy can be called only once, after the handler has been thoroughly tested offline.