Package and upload an AWS lambda with its minimal dependencies

> Package node.js code for AWS lambda with its minimal dependencies.
This module allows you to keep node.js files for an AWS Lambda function alongside other code, and makes it
easy to package a lambda function with only those dependencies that it needs. You can then update
a lambda directly, or prepare the packaged code as a local or S3 zip archive, including for use with
CloudFormation.
```
npm install --save-dev aws-lambda-upload
```

Usage:

```
$(npm bin)/aws-lambda-upload [options]
```

Here, the file argument is the path of the JS file that serves as the entry point into the Lambda. Note that in all cases, the basename of the entry file is what you'll use as the filename in the Lambda handler setting.
#### Update existing lambda
Use the `--lambda` flag to update a Lambda with the given name that you have previously created on AWS (e.g. using the AWS Lambda console).

Available programmatically as `updateLambda(startPath, lambdaName, options)`.
#### Saving a local zip file
Use the `--zip` flag to save the packaged lambda code to a zip file. It may then be used, e.g., with the `aws lambda update-function-code` command, or in a CloudFormation template with the `aws cloudformation package` command.

Available programmatically as `packageZipLocal(startPath, outputZipPath, options)`.
#### Saving a zip file to S3
Use the `--s3` flag to save the packaged lambda code to S3, and print the S3 URI to stdout.

The zip file will be saved to the bucket named by the `--s3-bucket` flag (defaulting to `aws-lambda-upload`),
and within that to the folder (prefix) named by the `--s3-prefix` flag (defaulting to empty). The basename of the
file will be its MD5 checksum (which is exactly what `aws cloudformation package` does), which avoids
duplication when uploading identical files.

Available programmatically as `packageZipS3(startPath, options)`.
#### Package for CloudFormation template
Use the `--cfn` flag to interpret the input path as the path to a CloudFormation template (.json or .yml file), package
any code it mentions to S3, replace the references with S3 locations, and output the adjusted template as JSON to the given output path (`-` for stdout).

This is similar to the `aws cloudformation package` command. It will process the following keys in the template:

* For a `Resource` with `Type: AWS::Lambda::Function`, processes the `Code` property.
* For a `Resource` with `Type: AWS::Serverless::Function`, processes the `CodeUri` property.

In both cases, if the relevant property is a file path, it is interpreted as a start JS file,
packaged with `packageZipS3()`, and the property is replaced with the S3 information
in the format required by CloudFormation. If the file path is relative, it's interpreted relative to the directory of the template.

Available programmatically as `cloudformationPackage(templatePath, outputPath, options)`.
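For instance, a template resource whose `Code` property points at a local JS file would qualify for packaging. The resource name, handler, runtime, and path below are hypothetical, for illustration only:

```
{
  "Resources": {
    "MyFunction": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Handler": "my_lambda.handler",
        "Runtime": "nodejs12.x",
        "Code": "lib/my_lambda.js"
      }
    }
  }
}
```

After processing, the `Code` property is replaced with an object naming the S3 bucket and key of the uploaded zip, which is the format CloudFormation expects for `AWS::Lambda::Function` resources.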
If your entry file requires other files in your project, or in `node_modules/`,
that's great. All dependencies will be collected and packaged into a temporary zip file.
Note that it does NOT package your entire directory or all of `node_modules/`.
It uses collect-js-deps
(which uses browserify) to examine the `require()` calls
in your files, and recursively collects all dependencies. For files in `node_modules/`, it also includes any `package.json` files, as they affect the
import logic.
Actually, all browserify options are supported, by including them after `--` on the command line
(the entry file and other arguments should come before that).
Since the main file of a Lambda must be at top level, if the entry file is in a subdirectory
(e.g. `lib/my_lambda.js`), a same-named top-level helper file (e.g. `my_lambda.js`) will be added
to the zip archive for you. It's a one-liner that re-exports the entry module, to let you use it
as the Lambda's main file.
#### Supports TypeScript!
With the `--tsconfig` flag, you may specify a path to `tsconfig.json`, or to the directory containing it,
and TypeScript dependencies will be compiled to JS and included. You'll have to have
tsify installed.

It is a convenience shortcut for including the tsify browserify plugin,
and is equivalent to passing tsify (with the tsconfig path) as a `-p` browserify option after `--` to collect-js-deps.
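A minimal `tsconfig.json` for this purpose might look like the following; the compiler options shown are common choices for Lambda-targeted Node code, not requirements of this package:

```
{
  "compilerOptions": {
    "target": "es2017",
    "module": "commonjs",
    "strict": true
  }
}
```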
#### Permissions

To be able to update Lambda code or upload anything to S3, you need sufficient permissions. Read about
configuring AWS
credentials
for how to set credentials that the AWS SDK can use.
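One common way to provide them is the shared credentials file at `~/.aws/credentials`, which the AWS SDK picks up automatically; the values below are placeholders:

```
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```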
To use the `--lambda` flag, the credentials you use need to
at least give you the `lambda:UpdateFunctionCode` permission for the
`arn:aws:lambda:` resource identifying your function.
To use the `--s3` or `--cfn` flags, the credentials need to give you the permission to list and create objects in the relevant S3 bucket.
E.g. the following policy works for the default bucket used by aws-lambda-upload:
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::aws-lambda-upload"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::aws-lambda-upload/*"
      ]
    }
  ]
}
```
#### Testing

Before you run tests for the first time, you need to set up
localstack. You can do it with

```
npm run setup-localstack
```

Note that localstack has a number of requirements.
Once set up, you can run tests with `npm test` as usual.