S3 as a Serverless Service
Just as we did with DynamoDB in the last chapter, we’ll look at splitting S3 into a separate Serverless service. It should be noted that for our simple note taking application, it doesn’t make too much sense to split S3 into its own service. But it is useful to go over the case to better understand cross-stack references in Serverless.
In the example repo, you’ll notice that we have an uploads service in the services/ directory. And the serverless.yml in this service looks like the following.
service: notes-app-mono-uploads

custom:
  # Our stage is based on what is passed in when running serverless
  # commands. Or falls back to what we have set in the provider section.
  stage: ${opt:stage, self:provider.stage}

provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: us-east-1

resources:
  Resources:
    S3Bucket:
      Type: AWS::S3::Bucket
      Properties:
        # Set the CORS policy
        CorsConfiguration:
          CorsRules:
            -
              AllowedOrigins:
                - '*'
              AllowedHeaders:
                - '*'
              AllowedMethods:
                - GET
                - PUT
                - POST
                - DELETE
                - HEAD
              MaxAge: 3000

  # Print out the name of the bucket that is created
  Outputs:
    AttachmentsBucketArn:
      Value:
        Fn::GetAtt:
          - S3Bucket
          - Arn
      Export:
        Name: ${self:custom.stage}-AttachmentsBucketArn
    AttachmentsBucketName:
      Value:
        Ref: S3Bucket
      Export:
        Name: ${self:custom.stage}-AttachmentsBucket
Most of the Resources: section should be fairly straightforward and is based on Part II of this guide. So let’s go over the cross-stack exports in the Outputs: section.
- Just as in the DynamoDB service, we are exporting the ARN (AttachmentsBucketArn) and the name of the bucket (AttachmentsBucketName).
- The names of the exported values are based on the stage: ${self:custom.stage}-AttachmentsBucketArn and ${self:custom.stage}-AttachmentsBucket.
- We can get the ARN by using the Fn::GetAtt function, passing in the ref (S3Bucket) and the attribute we need (Arn).
- And finally, we can get the bucket name by just using the ref (S3Bucket). Note that unlike the DynamoDB table name, the S3 bucket name is auto-generated. So while we could get away with not exporting the DynamoDB table name, in the case of S3 we need to export it. A consuming service can then import these values by name, as shown in the sketch after this list.
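To see how these exports get consumed, here is a minimal sketch of what a consuming service’s serverless.yml might look like when it imports these values with Fn::ImportValue. The service name and the IAM/environment layout here are illustrative assumptions, not part of the uploads service above; only the import names are taken from the exports we just defined.

service: notes-app-mono-notes

custom:
  stage: ${opt:stage, self:provider.stage}

provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: us-east-1
  environment:
    # Make the imported bucket name available to our Lambda functions
    bucketName:
      Fn::ImportValue: ${self:custom.stage}-AttachmentsBucket
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:GetObject
        - s3:PutObject
      Resource:
        # Allow access to the objects in the imported bucket (ARN + '/*')
        Fn::Join:
          - ''
          - - Fn::ImportValue: ${self:custom.stage}-AttachmentsBucketArn
            - '/*'

Note that the imports only resolve if the uploads service has already been deployed to the same stage, since CloudFormation looks up Fn::ImportValue against exports that exist in that account and region.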
Now that we have the main infrastructure pieces created, let’s take a look at our APIs next. For illustrative purposes we are going to create two separate API services and look at how to group them under the same API Gateway domain.
For reference, here is the code so far:
Mono-repo Backend Source