Stackery is a serverless operations console for managing production serverless applications throughout their lifecycle. It consists of a web-based operations dashboard and a command-line interface (CLI). In this introduction we’ll get familiar with common terms and concepts and learn how to get started managing distributed serverless applications with Stackery.
A Stack is a collection of cloud resources that compose a logical application, for example a REST API backend or a data processing pipeline. A stack’s architecture definition is stored in a stack.json file in the stack’s Git repository within your GitHub or AWS CodeCommit account. Modifications to stack architecture, such as provisioning a new Lambda function or database, are made through Stackery’s Web UI or by editing the stack.json file directly. Using Stackery’s deployment functionality, new stack versions are deployed to one or more AWS accounts and regions, with environment-specific configuration values supplied via Environments.
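To make the idea concrete, here is a rough sketch of what a stack.json architecture definition might contain. The field names and values below are illustrative only; consult the Stackery API Reference for the actual schema.

```json
{
  "nodes": [
    { "id": 1, "type": "restApi", "name": "Api" },
    { "id": 2, "type": "function", "name": "HandleRequest" }
  ],
  "wires": [
    { "source": 1, "target": 2 }
  ]
}
```

Because this file lives in your Git repository, architecture changes are versioned and reviewable just like application code.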
A Node is a resource within a Stack. Nodes are shown as boxes in Stackery’s Web UI. Nodes come in a variety of types that represent various cloud provider resources, such as AWS Lambda functions, API Gateway endpoints, Datastores (e.g. S3, Redis, MySQL, Postgres, DynamoDB), Network Infrastructure (e.g. Virtual Networks, CDNs, Load Balancers), and Docker Clusters. Some Node types have input and output Ports, which are used in combination with Wires to subscribe to and publish events. A full list of Node types can be found in the Stackery API Reference.
A Wire is a connection between two Nodes and is represented by a line between Nodes in Stackery’s Web UI. Wires are used to subscribe one Node to an event stream emitted by another Node. For example, a Function node’s input is commonly connected to a Rest Api node’s output, which subscribes an AWS Lambda function to API Gateway HTTP events. This pattern can be used to subscribe Lambda functions to a wide variety of event sources, such as S3, DynamoDB, API Gateway, Kinesis streams, cron timers, other Lambda functions, and unhandled exceptions.
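A function wired to a Rest Api node receives each HTTP request as an event. As a sketch of what that looks like on the function side, here is a minimal Python Lambda handler for an API Gateway proxy-style event; the event shape shown (httpMethod, path) follows the standard API Gateway proxy integration, not a Stackery-specific format.

```python
import json

def handler(event, context):
    """Handle an API Gateway proxy event delivered via a Wire."""
    # Proxy integration events carry the HTTP method and request path.
    method = event.get("httpMethod", "GET")
    path = event.get("path", "/")

    # Return a proxy-integration response: status code, headers, and a
    # JSON-serialized body.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"{method} {path} received"}),
    }
```

The same handler signature applies regardless of the event source; only the shape of `event` changes with the Node the function is wired to.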
Environments provide a mechanism to store environment-specific configuration values, such as database passwords, API keys, or application configuration, as well as deployment configuration (region and AWS account). It’s typical to have a set of environments such as production, staging, and development; in some cases it’s useful to create an individual development environment for each engineer. When an environment is created within Stackery, you specify a region and an AWS account, and stacks deployed to that environment are provisioned into that account and region.
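As an illustration, a production environment’s configuration values might look like the fragment below. The keys are entirely up to you; these names are hypothetical examples, not a Stackery-defined schema.

```json
{
  "databaseUrl": "postgres://prod-db.example.com/app",
  "apiKey": "REPLACE_ME",
  "logLevel": "warn"
}
```

A staging environment would carry the same keys with different values, so the same stack definition can be deployed unchanged across environments.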
During the deployment process, the stack is configured to inject the appropriate environment configuration into your stack components. Function nodes can access environment configuration through an explicit mapping of environment config values to runtime environment variables. Environment configuration values can also be used to control settings for other node types, for example the EC2 instance size of a Database node or the custom domain name to use for a Rest Api node. The deployment history for each environment can be seen in the operations dashboard UI.
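Once an environment config value is mapped to a runtime environment variable, the function reads it like any other environment variable. A minimal Python sketch, assuming hypothetical variable names DATABASE_URL and TABLE_NAME that you have mapped in your stack:

```python
import os

# Hypothetical names: DATABASE_URL and TABLE_NAME stand in for whatever
# environment config values you map to this function. The defaults here
# are fallbacks for local runs where no mapping has been applied.
DATABASE_URL = os.environ.get("DATABASE_URL", "postgres://localhost/dev")
TABLE_NAME = os.environ.get("TABLE_NAME", "items")

def handler(event, context):
    # At runtime the mapped config values are ordinary environment
    # variables, so the same code works across environments.
    return {"database": DATABASE_URL, "table": TABLE_NAME}
```

Because the mapping is resolved at deploy time per environment, the same function code picks up production values in production and staging values in staging without modification.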