
Recently, I encountered an issue where my local Docker environment refused to connect to AWS S3, although everything worked seamlessly in AWS-managed environments. This challenge was not just a technical hurdle; it was a crucial bottleneck that needed resolution to ensure smooth Drupal deployments across various AWS environments (dev, staging and production).

Integrating AWS S3 into a local Docker container running Drupal 10 is more than just a technical setup; it’s a transformation that will enhance your website’s scalability and performance by leveraging cloud storage for file management. In this article, I’ll guide you through the entire process, from setting up your AWS S3 bucket to configuring your Docker-based Drupal environment. Whether you’re struggling with similar issues or planning ahead, this guide aims to equip you with the knowledge to streamline your development and deployment process efficiently.

 

Update Drupal's configuration

Before integrating AWS S3 into your Drupal environment, it's crucial to update your Drupal configuration file to accommodate the S3 storage settings. This setup will enable Drupal to handle file management through the S3 bucket, thus offloading storage demands from your local server to the cloud.

Locate your settings.local.php file, found in the sites/default directory. If you haven't created a settings.local.php file yet, it's a good idea to do so, as this file is used for local development settings and is ideal for customising environment-specific configurations without affecting the production settings in settings.php.
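
Drupal core ships an example local settings file you can copy as a starting point; a minimal sketch, assuming a standard docroot layout (adjust the paths if your project serves Drupal from web/ or docroot/):

cp sites/example.settings.local.php sites/default/settings.local.php

Also make sure the block at the bottom of settings.php that includes settings.local.php is uncommented, otherwise Drupal will ignore the file.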

Edit settings.local.php to include the following AWS S3 configuration under the Flysystem settings block. This snippet configures Drupal to use S3 as a file system backend, specifying parameters like the access key, secret key, and the S3 bucket details. Make sure to replace the {TOCOME} and {REGION} placeholders with your actual AWS credentials and details later in the process:

// Flysystem S3 config
$schemes = [
  's3' => [
    'driver' => 's3',
    'config' => [
      'key' => '{TOCOME}',  // AWS access key
      'secret' => '{TOCOME}',  // AWS secret key
      'token' => '{TOCOME}',  // AWS session token
      'region' => '{REGION}',  // AWS region
      'bucket' => 'local-dev-flysystem-s3',  // S3 bucket name
      'options' => [
        'ACL' => 'public-read',
        'StorageClass' => 'REDUCED_REDUNDANCY'
      ],
      'protocol' => 'https',
      'public' => TRUE
    ],
    'serve_js' => TRUE,
    'serve_css' => TRUE,
    'cache' => TRUE  // Enables metadata cache
  ]
];
$settings['flysystem'] = $schemes;

This configuration integrates Flysystem, a filesystem abstraction library for PHP, with your Drupal instance. Flysystem provides a unified interface to handle files across various storage backends, including AWS S3. The public option ensures files are accessible over the web, while cache enhances performance by caching metadata.
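
This configuration assumes the Flysystem and Flysystem S3 modules are already installed and enabled; if they are not, a typical setup using Composer and Drush looks like the following (module names as published on Drupal.org):

composer require drupal/flysystem drupal/flysystem_s3
drush en flysystem flysystem_s3 -y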

After updating the configuration, the next steps will involve securing your AWS credentials to complete the setup, ensuring your Drupal application can securely communicate with your S3 bucket.

 

Obtain AWS credentials

To interact with AWS S3, you must have valid AWS credentials, which include an Access Key ID, a Secret Access Key, and a Session Token. These credentials allow your Drupal site running in a Docker container to securely access and manage files in the AWS S3 bucket.

 

Using PowerShell to set environment variables

For developers working on Windows or using PowerShell, setting environment variables is a straightforward process. Open your PowerShell terminal and enter the following commands to set up the AWS credentials as environment variables. These variables will be used by AWS CLI and other tools that require AWS access:

$Env:AWS_ACCESS_KEY_ID="your_access_key_id"
$Env:AWS_SECRET_ACCESS_KEY="your_secret_access_key"
$Env:AWS_SESSION_TOKEN="your_session_token"

Make sure to replace "your_access_key_id", "your_secret_access_key", and "your_session_token" with the actual values you obtain from your AWS account.
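
To confirm the variables are being picked up, you can ask AWS to identify the caller; this check works the same way in PowerShell and bash:

aws sts get-caller-identity

If the credentials are valid, this returns the account, user ID and ARN associated with them.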

Obtaining credentials from AWS

If you do not already have AWS credentials, you can generate them by working through the following:

  1. Log in to your AWS Management Console
  2. Navigate to the IAM (Identity and Access Management) section
  3. Under Users, select your user account or create a new user with appropriate permissions to access the S3 service
  4. Under the Security credentials tab for the selected user, click on 'Create access key'
  5. Download the credentials or copy them directly from the interface

It's important to handle these credentials securely. Never commit them into version control systems or leave them exposed in your application's code.


 

Security considerations

Always ensure that the IAM user whose credentials you are using has the minimum necessary permissions for the operations your Drupal site needs to perform. This practice, known as the principle of least privilege, helps minimise potential security risks.

With your AWS credentials set up in your development environment, you're now ready to proceed with assuming an AWS role to further secure and scope access to your resources.

 

Setting environment variables on other systems

If you are not using PowerShell, or are on a different operating system such as macOS or Linux, you can set the same credentials as environment variables in your terminal:

export AWS_ACCESS_KEY_ID="your_access_key_id"
export AWS_SECRET_ACCESS_KEY="your_secret_access_key"
export AWS_SESSION_TOKEN="your_session_token"

As before, replace the placeholder values with your actual credentials.

 

Assume an AWS role

Assuming an AWS role is crucial for providing your Dockerised Drupal application with the necessary permissions to interact with the S3 bucket without hardcoding permanent credentials. This practice enhances security by using temporary credentials that limit access scope and duration.

 

Why assume a role?

  • Security: Limits permissions to the specific requirements of the session, reducing the risk of unauthorised access
  • Flexibility: Allows different parts of your application to assume different roles based on their specific needs, such as reading from one bucket and writing to another
  • Compliance: Ensures that actions taken by the software can be tracked to a specific role assumption event, aiding in audits.

 

How to assume a role using AWS CLI

Open your terminal where AWS CLI is installed and use the following command to assume a role. Ensure that your AWS credentials (previously set as environment variables) are active:

aws sts assume-role --role-arn "arn:aws:iam::account-id:role/role-name" --role-session-name "DescriptiveSessionName"

  • role-arn: Replace "account-id" and "role-name" with the actual AWS account ID and the role you intend to assume
  • role-session-name: This is an identifier for the session and can be any descriptive name that makes sense for your application

If the role you're trying to assume is associated with the S3 bucket you configured earlier, the command might look like this:

aws sts assume-role --role-arn "arn:aws:iam::0123456789012:role/local-dev-flysystem-s3" --role-session-name "AWSCLI-Session"

 

Handling the response

This command will output JSON containing temporary security credentials:

{
   "Credentials": {
       "AccessKeyId": "temporary_access_key_id",
       "SecretAccessKey": "temporary_secret_access_key",
       "SessionToken": "temporary_session_token",
       "Expiration": "expiration_datetime"
   },
   "AssumedRoleUser": {
       "AssumedRoleId": "role_id",
       "Arn": "role_arn"
   }
}

 

Utilising the credentials

Update your environment variables or directly use these credentials in your application's AWS SDK or CLI commands, and ensure your application's AWS SDK configuration is set to refresh them before they expire. For example, running the command against the role configured earlier:

aws sts assume-role --role-arn "arn:aws:iam::0123456789012:role/local-dev-flysystem-s3" --role-session-name AWSCLI-Session

This command should respond with:

{
    "Credentials": {
        "AccessKeyId": "your_access_key_id",
        "SecretAccessKey": "your_secret_access_key",
        "SessionToken": "your_session_token",
        "Expiration": "2024-05-24T07:09:24+00:00"
    },
    "AssumedRoleUser": {
        "AssumedRoleId": "your_role_id",
        "Arn": "arn:aws:sts::0123456789012:assumed-role/local-dev-flysystem-s3/AWSCLI-Session"
    }
}
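
Rather than copying these values out of the JSON by hand, you can capture and export them in one step. A minimal bash sketch, assuming the jq JSON processor is installed:

# Assume the role and capture the JSON response
CREDS=$(aws sts assume-role \
  --role-arn "arn:aws:iam::0123456789012:role/local-dev-flysystem-s3" \
  --role-session-name "AWSCLI-Session" \
  --output json)

# Export the temporary credentials for subsequent AWS CLI calls
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r '.Credentials.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r '.Credentials.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r '.Credentials.SessionToken')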

 

Test S3 bucket access

Once you have assumed an AWS role and obtained temporary security credentials, it's essential to validate that these credentials allow your application to interact with the specified S3 bucket. This step ensures that the role has the necessary permissions and that your system is correctly configured.

 

Testing access with AWS CLI

Use the AWS Command Line Interface (CLI) to test access to the S3 bucket. The following command attempts to list the contents of your S3 bucket:

aws s3 ls s3://local-dev-flysystem-s3

Ensure that your terminal session has the temporary credentials active. If you've set them as environment variables, the AWS CLI will use these automatically.

If the assumed role has the correct permissions, you should see a list of files and directories in the bucket, if any are present. An empty list indicates that the bucket is accessible but currently has no files, which is also a successful outcome.
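
For a more thorough check, you can round-trip a file through the bucket, which also exercises write and delete permissions (the file name here is just an example):

echo "s3 access test" > s3-test.txt
aws s3 cp s3-test.txt s3://local-dev-flysystem-s3/s3-test.txt
aws s3 rm s3://local-dev-flysystem-s3/s3-test.txt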

 

Handling common errors

ExpiredToken error

If you receive an error message stating 'An error occurred (ExpiredToken) when calling the ListObjectsV2 operation: The provided token has expired.', your session token has expired. To resolve this, assume the AWS role again to generate new credentials, as outlined in the previous steps.

Permission errors

If you encounter permissions-related errors such as Access Denied, the assumed role does not have sufficient permissions to perform the requested operation. Review the IAM role's permissions in the AWS Management Console, ensuring it includes policies that allow listing and accessing objects in the specific S3 bucket.

 

Verifying and troubleshooting

  • Ensure correct configuration: Double-check the bucket name and region in your commands
  • Check AWS IAM policies: Verify that the IAM policies attached to the role include permissions for actions like s3:ListBucket and s3:GetObject
  • Console check: Log in to the AWS Management Console and manually try accessing the S3 bucket using the same role to confirm permissions.
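
You can also inspect which policies are attached to the role from the CLI; a quick check, assuming the role name used earlier:

aws iam list-attached-role-policies --role-name local-dev-flysystem-s3
aws iam list-role-policies --role-name local-dev-flysystem-s3

The first command lists managed policies, the second lists any inline policies.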

Once you have confirmed that your Docker Drupal setup can access the S3 bucket, you are ready to proceed to the final step of integrating these settings into your Drupal configuration and restarting your Docker environment to apply the changes. This step is crucial for ensuring your Drupal instance can effectively use S3 for file storage.

 

Update Drupal configuration

After successfully testing the S3 bucket access, the next crucial step is to integrate the temporary security credentials into your Drupal environment. This involves updating your settings.local.php file with the credentials obtained from the assume-role command.

 

Update the configuration file

Using the credentials extracted earlier, we will now apply these to the settings.local.php file. Navigate back to your settings.local.php file (where this article started) in the sites/default directory of your Drupal installation.

Replace the {TOCOME} placeholders in the S3 configuration section with the actual values of the temporary credentials. I've also set the region to 'ap-southeast-2':

// Flysystem S3 config
$schemes = [
  's3' => [
    'driver' => 's3',
    'config' => [
      'key' => 'your_access_key_id',  // Temporary Access Key ID from assumed role
      'secret' => 'your_secret_access_key',  // Temporary Secret Access Key from assumed role
      'token' => 'your_session_token',  // Temporary Session Token from assumed role
      'region' => 'ap-southeast-2',  // AWS region of the S3 bucket
      'bucket' => 'local-dev-flysystem-s3',  // Name of your S3 bucket
      'options' => [
        'ACL' => 'public-read',
        'StorageClass' => 'REDUCED_REDUNDANCY'
      ],
      'protocol' => 'https',
      'public' => TRUE
    ],
    'serve_js' => TRUE,
    'serve_css' => TRUE,
    'cache' => TRUE  // Enables metadata cache
  ]
];
$settings['flysystem'] = $schemes;

Considerations for using temporary credentials

  1. Credential expiration – temporary credentials provided by AWS have an expiration time. Ensure your application or development environment can handle the automatic refresh of these credentials without manual intervention
  2. Security best practices – avoid using permanent AWS credentials in your development files. Temporary credentials reduce risk by limiting the timeframe for potential exposure
  3. Error handling – implement error handling in your application to gracefully manage the expiration of temporary credentials. This might include logging warnings and automatically attempting to re-assume the role.

After updating your settings.local.php, you will need to test these changes. Clear Drupal's cache – Drupal caches configuration and other data, which might prevent new settings from taking immediate effect. Clear the cache through Drupal's administrative interface or by using the Drush command:

drush cr

You can verify this functionality by performing actions in Drupal that interact with the file system, such as uploading a file, to ensure that the S3 integration works as expected.
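
You can also confirm from the command line that Drupal has registered the s3:// stream wrapper; a quick sketch, assuming Drush is available in your container:

drush php:eval "var_dump((bool) \Drupal::service('stream_wrapper_manager')->getViaScheme('s3'));"

This should print bool(true) once the Flysystem scheme is active.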

 

Restart Docker

The final step in integrating AWS S3 with your Drupal 10 environment running on Docker is to restart the Docker container. This ensures that all the new configurations and settings are loaded and that your Drupal instance can interact with the S3 bucket using the updated credentials.

How to restart your Docker container

Identify the container ID – before you can restart the container, you need to know its ID. You can find this by listing all running containers with the following command:

docker ps

This command will display all active containers along with their IDs and other details. Locate the ID of the container running your Drupal instance.

To restart the container, run the following command, replacing [container_id] with the actual ID you obtained from the previous step:

docker restart [container_id]

This command will stop the container and start it again, applying any configuration changes made.
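
If your environment is managed with Docker Compose, you can restart the whole stack instead; this assumes a compose file in your project root:

docker compose restart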

Verify the restart by checking the container logs, ensuring the container starts up without any errors:

docker logs [container_id]

Test the Drupal site using a web browser and check to see if it is functioning correctly. Perform some file-related operations, such as uploading an image or a document, to verify that the S3 integration is working as expected.

 

Troubleshooting common issues

  1. Configuration errors: If the Docker container fails to start, check your settings.local.php for any syntax errors or incorrect configurations
  2. Connection issues: Ensure that the container has internet access if it is unable to connect to AWS services (see the check after this list)
  3. Permission problems: If Drupal cannot interact with the S3 bucket, recheck the IAM role permissions and ensure the AWS credentials are correct and valid.
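
For the connection check in point 2, you can probe AWS reachability from inside the container; a quick sketch, assuming curl is available in the image:

docker exec [container_id] curl -sI https://s3.ap-southeast-2.amazonaws.com

Any HTTP response at all proves the container can reach S3; a timeout points to a networking problem.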
     

By configuring your local Drupal 10 instance to use AWS S3 through Docker, you achieve several benefits such as scalability, reliability and performance.

Scalability – AWS S3 provides scalable cloud storage, handling high loads and large amounts of data efficiently.

Reliability – S3 offers high durability and availability, ensuring that your data is safe and always accessible.

Performance – offloading static content to S3 can reduce the load on your Docker servers and improve the overall performance of your site.

With these steps completed, your Drupal 10 instance should be fully integrated with AWS S3, enhancing its capabilities and preparing it for scalable and efficient operation. This setup is ideal for development and can be adapted for production environments with appropriate security measures.

 

The wrap

Successfully integrating AWS S3 with a local Docker environment running Drupal 10 is a transformative process that boosts your application’s scalability, reliability, and overall performance. Throughout this comprehensive article, we walked through each critical step, from configuring Drupal to handle AWS S3 storage, securing and managing AWS credentials, to ensuring robust security practices with role assumption and environmental isolation.

This journey began with a challenging issue of discrepancies between local and AWS-managed environments, leading to a deep dive into the configuration and management of AWS services directly from a Dockerised Drupal setup. We addressed each technical hurdle, ensuring that by the end of this tutorial, you are equipped not only with a functional setup but also with the best practices in cloud storage and Drupal management.

Key takeaways include configuration, role management, testing and validation:

  • Configuration – updating the settings.local.php file to seamlessly integrate with S3, using Flysystem for a unified file management interface
  • Credential management – leveraging temporary AWS credentials, set via environment variables, enhances security by limiting exposure and potential misuse
  • Role management – assuming specific AWS roles ensures that your application adheres to the principle of least privilege, minimising potential security risks while maintaining necessary access
  • Testing and validation – rigorous testing confirms that your setup not only meets technical specifications but also aligns with operational requirements, ensuring that file uploads and interactions are handled smoothly.

By the end of the setup, your local Drupal environment should be fully prepared to leverage AWS S3, ensuring high availability and performance of your media and static files, thereby reducing the load on your local servers. This integration not only prepares your Drupal site for higher traffic and data loads but also streamlines development and deployment processes across various environments.
