# Running EC2 Workloads
With SIMFRA_DOCKER=true, EC2 instances run as Docker containers. Each instance gets a container with the appropriate OS image, user data execution, VPC network attachment, and IMDS-based credential discovery.
## Prerequisites

- Simfra running with `SIMFRA_DOCKER=true`
- Docker daemon accessible
- A bootstrapped account with VPCs and subnets (see Bootstrapping)
## How It Works
When RunInstances is called, Simfra:
- Creates instance records with the correct state machine (pending -> running)
- Resolves the AMI ID to a Docker image
- Creates a Docker container on the instance's VPC network
- Injects region and credential environment variables
- Executes user data as a startup script inside the container
Instances are Docker containers, not virtual machines. They run a Linux init system and can execute shell commands, install packages, and run services - but they do not boot a full OS kernel.
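The lifecycle above can be sketched as a small transition table. This is an illustrative model under stated assumptions, not Simfra's actual implementation; in particular, the `stopped -> pending` restart edge and the `transition` helper are assumptions:

```python
# Illustrative EC2 lifecycle model. The stopped -> pending restart edge
# and the transition() helper are assumptions, not Simfra internals.
VALID_TRANSITIONS = {
    "pending": {"running"},
    "running": {"stopping"},
    "stopping": {"stopped"},
    "stopped": {"pending", "terminated"},
    "terminated": set(),
}

def transition(current, target):
    """Move an instance to a new state, rejecting illegal jumps."""
    if target not in VALID_TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target
```

Because the timing of these transitions is driven by Simfra's worker rather than real boot times, tests should poll for the target state rather than sleep for a fixed interval.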
## AMI to Docker Image Mapping
Simfra maps AMI IDs to Docker images based on the AMI name pattern:
| AMI Name Pattern | Docker Image | Override Variable |
|---|---|---|
| `amzn2-*` | `amazonlinux:2` | `SIMFRA_EC2_IMAGE_AMAZONLINUX2` |
| `al2023-*` | `amazonlinux:2023` | `SIMFRA_EC2_IMAGE_AMAZONLINUX2023` |
| `ubuntu*` | `ubuntu:latest` | `SIMFRA_EC2_IMAGE_UBUNTU` |
| `debian*` | `debian:latest` | `SIMFRA_EC2_IMAGE_DEBIAN` |
| (default) | `amazonlinux:2023` | `SIMFRA_EC2_DEFAULT_IMAGE` |
Custom AMIs registered with RegisterImage can specify an explicit Docker image via the DockerImage field, which takes priority over name-based matching.
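The resolution order — explicit `DockerImage` first, then name-pattern match with environment override, then the default — can be sketched as follows. The function and table names here are illustrative, not Simfra's internals:

```python
import fnmatch
import os

# Illustrative version of the mapping table above.
# (resolve_image and PATTERNS are sketch names, not Simfra's API.)
PATTERNS = [
    ("amzn2-*", "amazonlinux:2", "SIMFRA_EC2_IMAGE_AMAZONLINUX2"),
    ("al2023-*", "amazonlinux:2023", "SIMFRA_EC2_IMAGE_AMAZONLINUX2023"),
    ("ubuntu*", "ubuntu:latest", "SIMFRA_EC2_IMAGE_UBUNTU"),
    ("debian*", "debian:latest", "SIMFRA_EC2_IMAGE_DEBIAN"),
]

def resolve_image(ami_name, docker_image=None):
    """Resolve an AMI to a Docker image."""
    # An explicit DockerImage on the AMI wins over name matching.
    if docker_image:
        return docker_image
    for pattern, default, override_var in PATTERNS:
        if fnmatch.fnmatch(ami_name, pattern):
            return os.environ.get(override_var, default)
    return os.environ.get("SIMFRA_EC2_DEFAULT_IMAGE", "amazonlinux:2023")
```

For example, an AMI named `amzn2-ami-hvm-x86_64` resolves to `amazonlinux:2` unless `SIMFRA_EC2_IMAGE_AMAZONLINUX2` is set.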
## IMDS (Instance Metadata Service)
When an IAM instance profile is attached, Simfra provides an IMDS endpoint that the AWS SDK discovers automatically. The AWS_EC2_METADATA_SERVICE_ENDPOINT environment variable is injected into the container, pointing to Simfra's IMDS handler.
IMDS serves the standard metadata paths:
| Path | Example Value |
|---|---|
| `/latest/meta-data/instance-id` | `i-0abc123def456` |
| `/latest/meta-data/ami-id` | `ami-0abc123` |
| `/latest/meta-data/instance-type` | `t3.micro` |
| `/latest/meta-data/placement/availability-zone` | `us-east-1a` |
| `/latest/meta-data/local-ipv4` | `10.0.1.15` |
| `/latest/meta-data/public-ipv4` | `54.1.2.3` |
| `/latest/meta-data/iam/security-credentials/<role>` | Temporary credentials JSON |
Code running on the instance uses the standard SDK credential chain without any modification:
```python
import boto3

# SDK discovers the IMDS endpoint via AWS_EC2_METADATA_SERVICE_ENDPOINT
# and retrieves credentials from the instance profile automatically.
s3 = boto3.client('s3', region_name='us-east-1')
s3.list_buckets()
```
When no instance profile is attached, Simfra injects static root credentials (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY) as a fallback.
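The two injection paths can be summarized in a small sketch. The environment variable names are the standard AWS SDK ones; the function itself and its parameters are illustrative, not Simfra's API:

```python
# Sketch of the credential environment a container receives.
# credential_env and its parameters are illustrative assumptions.
def credential_env(instance_profile, imds_url, root_key, root_secret):
    """Return the env vars injected for the two cases described above."""
    if instance_profile:
        # Profile attached: the SDK calls IMDS and fetches temporary
        # role credentials itself; only the endpoint hint is needed.
        return {"AWS_EC2_METADATA_SERVICE_ENDPOINT": imds_url}
    # No profile: fall back to static root credentials.
    return {
        "AWS_ACCESS_KEY_ID": root_key,
        "AWS_SECRET_ACCESS_KEY": root_secret,
    }
```

Note that the two cases are mutually exclusive: a container with an instance profile relies on IMDS and does not receive static keys.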
## User Data
User data scripts execute when the container starts. Provide user data as a base64-encoded string:
```bash
aws ec2 run-instances \
  --image-id ami-0abc123 \
  --instance-type t3.micro \
  --subnet-id subnet-abc123 \
  --user-data "$(base64 <<'SCRIPT'
#!/bin/bash
yum install -y httpd
echo "Hello from Simfra" > /var/www/html/index.html
systemctl start httpd
SCRIPT
)"
```
User data is decoded and passed to the container runtime, which executes it as a shell script on startup. For simple shell scripts, this behaves the same way as cloud-init on real EC2 instances.
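The encode/decode round trip is plain base64, matching what the CLI example above does with `base64 <<'SCRIPT'`. These helper names are illustrative, not part of Simfra's API:

```python
import base64

# Illustrative helpers for the user-data round trip.
def encode_user_data(script):
    """Encode a startup script, as the CLI example above does with base64."""
    return base64.b64encode(script.encode("utf-8")).decode("ascii")

def decode_user_data(user_data_b64):
    """Decode user data back into the script the container runs at startup."""
    return base64.b64decode(user_data_b64).decode("utf-8")
```

The AWS SDKs perform the encoding for you; only raw API calls and the `--user-data "$(base64 ...)"` pattern shown above need to encode explicitly.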
## VPC Networking
Each EC2 instance is attached to its VPC's Docker network. With VPC isolation enabled (the default when Docker is on):
- Instances in the same VPC can communicate by private IP
- Instances resolve DNS names via Simfra's DNS container (Route53 zones, service endpoints)
- Private instances are not reachable from the host
- Public instances with Elastic IPs have ports forwarded to the host
```bash
# Launch in a specific subnet
aws ec2 run-instances \
  --image-id ami-0abc123 \
  --instance-type t3.micro \
  --subnet-id subnet-abc123 \
  --security-group-ids sg-abc123 \
  --iam-instance-profile Name=my-instance-profile
```
## SSM Session Manager
Connect to running instances through SSM Session Manager via the web console. This provides a terminal session inside the container without SSH.
SSH is not available because instances are Docker containers, not VMs with a full sshd process.
## Example: Web Server with IAM Instance Profile
This example launches an instance that installs a web server and reads configuration from S3.
```bash
# Create an IAM role for the instance
aws iam create-role \
  --role-name web-server-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "ec2.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'

aws iam attach-role-policy \
  --role-name web-server-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

# Create instance profile and attach role
aws iam create-instance-profile --instance-profile-name web-server
aws iam add-role-to-instance-profile \
  --instance-profile-name web-server \
  --role-name web-server-role

# Upload config to S3
aws s3 cp config.json s3://my-configs/web-server/config.json

# Launch the instance with user data
aws ec2 run-instances \
  --image-id ami-0abc123 \
  --instance-type t3.micro \
  --subnet-id subnet-abc123 \
  --iam-instance-profile Name=web-server \
  --user-data "$(base64 <<'SCRIPT'
#!/bin/bash
yum install -y httpd aws-cli
# The CLI discovers credentials via IMDS automatically
aws s3 cp s3://my-configs/web-server/config.json /etc/web-server/config.json
# Start the web server
systemctl start httpd
SCRIPT
)"
```
## Limitations
- No SSH access - use SSM Session Manager instead
- No full kernel - instances run as containers, not VMs. Kernel modules, custom networking (iptables rules), and systemd-dependent services may not work as expected
- Instance types are advisory - resource limits (memory, CPU) are mapped from the instance type, but the mapping is approximate
- State transitions are simulated - instances go through pending -> running -> stopping -> stopped -> terminated, but the timing is controlled by Simfra's worker interval, not real boot times
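The "advisory" instance-type mapping can be pictured as a lookup from type name to container resource limits. The vCPU and memory figures below match the real EC2 sizes and the field names match Docker's `HostConfig` (`NanoCpus`, `Memory`); whether Simfra uses exactly these values is unspecified, so treat this as a sketch:

```python
# Illustrative instance-type -> container-limit mapping.
# Sizes match real EC2 instance types; the exact values Simfra
# applies are an assumption.
INSTANCE_LIMITS = {
    "t3.micro": {"cpus": 2, "memory_mb": 1024},
    "t3.small": {"cpus": 2, "memory_mb": 2048},
    "t3.medium": {"cpus": 2, "memory_mb": 4096},
    "m5.large": {"cpus": 2, "memory_mb": 8192},
}

def docker_limits(instance_type):
    """Translate an instance type into Docker HostConfig-style limits."""
    spec = INSTANCE_LIMITS.get(instance_type, {"cpus": 1, "memory_mb": 512})
    return {
        "NanoCpus": spec["cpus"] * 1_000_000_000,  # 1e9 units per CPU
        "Memory": spec["memory_mb"] * 1024 * 1024,  # bytes
    }
```

Because the limits are approximate, do not use Simfra to benchmark workload sizing; it verifies behavior, not performance.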
## Next Steps
- Endpoint Discovery - how EC2 instances discover the Simfra endpoint via IMDS
- VPC Isolation - how VPC networks affect EC2 container connectivity
- Writing ECS Tasks - running containerized workloads in ECS