Your Resource for All Things Apps, Ops, and Infrastructure

Compliance in the Age of Cloud: Security Matters. Know What You're Doing.

Verizon, publisher of the Data Breach Investigations Report, had its own share of security problems this past week. It may not be fair to say Verizon is at fault, but the operator of its call centers, NICE Systems, inadvertently put millions of Verizon's customers at risk.

NICE Systems, an Israel-based technology company serving major companies and nation-states, improperly configured the permissions on an Amazon S3 bucket holding customer data. This is the same misconfiguration that the global management consulting firm Booz Allen Hamilton was discovered to have made, exposing over 60,000 DoD files to the public. Also in the news this week: University of Iowa Health Care (UIHC) was just hit with the same issue.

All of these incidents highlight two overarching issues with public cloud adoption.

  • First, there is a lack of understanding of basic platform security controls as they apply to the public cloud environment.
  • Second, and closely related, there is a lack of governance policy and guidance around controlling the assets rapidly moving to these cloud providers.

This lack of understanding and lack of policy is not a cloud provider problem; it is a cloud consumer problem.

AWS Security Methodology and Protocols

AWS gives users the ability to “tag” assets with information about the system’s use. For example, an S3 bucket could be tagged with the label “PCI”, “DoD”, or “Call Transcripts”. This type of data classification is fundamental to a strong security program, and AWS’s tagging ability allows data classification to be accomplished with greater ease than ever before.

Aligning your data classification strategy with your tagging strategy enables automated scanning of assets: various tools can verify, based on the sensitivity of the data, that the proper security controls are in place.
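As a sketch of what tag-driven scanning might look like, the short example below checks resource metadata against a classification policy. The classification names, control fields, and the `audit` helper are illustrative assumptions, not AWS APIs; in practice the resource inventory would come from the cloud provider's APIs.

```python
# Hypothetical policy: each classification tag value maps to the controls
# we expect. The names here are illustrative, not an AWS feature.
POLICY = {
    "PCI": {"encryption": True, "public": False},
    "DoD": {"encryption": True, "public": False},
    "Public-Web": {"encryption": False, "public": True},
}

def audit(resources):
    """Return (name, problem) findings for non-compliant resources."""
    findings = []
    for res in resources:
        classification = res.get("tags", {}).get("Classification")
        if classification not in POLICY:
            findings.append((res["name"], "missing or unknown Classification tag"))
            continue
        expected = POLICY[classification]
        if res.get("encrypted", False) != expected["encryption"]:
            findings.append((res["name"], "encryption setting does not match policy"))
        if res.get("public", False) and not expected["public"]:
            findings.append((res["name"], "resource is public but policy forbids it"))
    return findings

# Hand-built inventory standing in for an API call:
resources = [
    {"name": "call-transcripts", "tags": {"Classification": "PCI"},
     "encrypted": True, "public": True},   # public PCI data -> finding
    {"name": "marketing-site", "tags": {"Classification": "Public-Web"},
     "encrypted": False, "public": True},  # compliant
    {"name": "scratch-bucket", "tags": {}},  # untagged -> finding
]

for name, problem in audit(resources):
    print(f"{name}: {problem}")
```

The point is that once every asset carries a classification tag, "are the right controls in place?" becomes a loop over an inventory rather than a manual review.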

This type of automated testing eases the implementation of traditional infrastructure governance practices; what was once a tedious process for on-premises infrastructure becomes programmatic and inherent. Large organizations like Verizon and Booz Allen Hamilton have strict compliance standards they must adhere to. When data moves to the cloud and can be shared from systems that can easily be made "public" without firewalls, routing, or other access control methodologies, these compliance policies become more critical than ever.

However, the startling trend we've seen recently is that cloud-based assets are not under the same controls as their on-premises counterparts; at minimum, a rapid and fundamental shift in education and policy creation for protection in the cloud is necessary. In each of the cases referenced above, the root problem was the security settings on an S3 object store.

AWS provides documentation on how to achieve various compliance standards to prevent these kinds of mistakes.

The TL;DR Version

The same controls that are applied to on-premises workloads should be applied in the cloud! The cloud brings speed and agility, but without the guardrails of governance and compliance, customer data can be placed at serious risk.

All of this highlights the need to understand the "shared responsibility model." With an on-premises data center, the organization is responsible for every layer of security, from physical security all the way through application security. With various cloud offerings, different levels of security are offloaded to the cloud provider, but they still should not be taken for granted or ignored.

As you consume higher-level cloud services, greater levels of security responsibility shift from your team to the cloud provider. For example, if you're using IaaS (infrastructure as a service) to run an application such as MS-SQL, you're completely responsible for patching Windows and operating the database securely, while the cloud provider owns the physical and infrastructure security layers.

Conversely, if you're using MS-SQL as PaaS (platform as a service), you're no longer responsible for patching the underlying Windows operating system or maintaining a secure configuration for the database software. The cloud provider absorbs those critical security responsibilities… but you still own access control and application security for the applications that leverage the database.

Regardless of the model and scenario, users never completely absolve themselves of security responsibility, which is why it's critical to understand how cloud services are secured.

Whether you broker services to the greater business within an enterprise or consume cloud services directly, start by using the existing tools provided to proactively set up, configure, and manage your public cloud environments. This includes not only the core services an application needs, but also the security and governance policy requirements. In the most recent leak, these would be the fundamental controls that should have been applied to the bucket configured by NICE. It is worth noting that S3 buckets are not publicly accessible by default, meaning someone modified the configuration to enable access.

How can the enterprise, or any consumer of today’s public cloud offerings, ensure that their applications and data are secure?

We'd like to offer two approaches that would, with varying degrees of complexity, help cloud consumers at any stage of their evolution.

1. Go on the Offensive

CloudFormation templates allow you to define your application, platform, or infrastructure as code. This includes the common services applications need, such as EC2 (compute), S3 (storage), and DynamoDB (NoSQL database), but templates also allow administrators to define policy items: tags, the types of services exposed to the end user, the amount of a given resource that can be consumed, and scripts that cannot be modified (think template hardening, one-time installs, etc.).

Now, combine that with a user portal with RBAC, and you can scope these CloudFormation templates to specific users and applications, proactively enforcing governance and consistency while retaining a complete audit trail and reporting.
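As a minimal sketch (the resource and tag names are illustrative assumptions), a template like this pins both the bucket's access control and its classification tag at provision time, so neither is left to a console click:

```yaml
Resources:
  CallRecordingBucket:
    Type: AWS::S3::Bucket
    Properties:
      AccessControl: Private       # 'PublicRead' here is what exposes a bucket
      Tags:
        - Key: Classification
          Value: PII
        - Key: Owner
          Value: call-center-ops
```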

In NICE's case, the bucket's 'AccessControl' setting was effectively public. Because the leaked data was PII, the correct value would have been 'AuthenticatedRead' or 'Private'.

2. The “OH NO!” Moment

Your company is two years into its AWS journey, and you have concerns about your AWS GRC profile with no idea where to start. Maybe your application architectures are unique, or your consumers need direct access to the AWS portal, so CloudFormation templates may not be a viable way to set and keep policy. There is no single answer to this "problem": every business has different security and governance requirements, as well as different policies for enforcing them. So let's look at some possibilities for solving this when in a reactive mode:

  • Utilize an orchestration engine or a Lambda function that runs on a cadence, looking for new or existing services deployed in the environment and verifying that they carry the proper tags and that those tags have valid values. If they do not, automation can create an alert within an ITSM system or simply decommission them from the environment. (Look for a future post highlighting an example of how to use native AWS functionality with Lambda in this space.)
  • Use a tool like CloudCheckr, which will discover your environment and surface predefined security issues, as well as items you define yourself, so you can report on them and take action.
(Figure: example CloudCheckr security report)
  • Lastly, use a log aggregation tool such as Splunk to ingest AWS CloudWatch and, more importantly, CloudTrail data to detect changes in the environment. Once a filter has been configured, Splunk will actively create alerts, and it can open tickets within an ITSM system when drift is detected.
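The drift-detection idea in the last bullet can be sketched without any log tooling: CloudTrail records each API call as a JSON event with fields like `eventName`, `userIdentity`, and `requestParameters`. The event list, watched-event set, and `drift_alerts` helper below are illustrative assumptions; in practice the events would arrive via a log pipeline (CloudTrail to S3 to Splunk, or a Lambda subscription).

```python
# Event names that change who can read an S3 bucket. A real deployment
# would likely watch a broader list (encryption changes, IAM edits, etc.).
WATCHED_EVENTS = {"PutBucketAcl", "PutBucketPolicy"}

def drift_alerts(events):
    """Return human-readable alerts for permission-changing CloudTrail events."""
    alerts = []
    for event in events:
        if event.get("eventName") in WATCHED_EVENTS:
            bucket = event.get("requestParameters", {}).get("bucketName", "unknown")
            user = event.get("userIdentity", {}).get("userName", "unknown")
            alerts.append(f"{event['eventName']} on bucket '{bucket}' by {user}")
    return alerts

# Two hand-written events standing in for a CloudTrail log stream:
events = [
    {"eventName": "GetObject",
     "requestParameters": {"bucketName": "logs"}},          # routine, ignored
    {"eventName": "PutBucketAcl",
     "requestParameters": {"bucketName": "call-transcripts"},
     "userIdentity": {"userName": "contractor-1"}},          # drift, alerted
]

for alert in drift_alerts(events):
    print(alert)  # feed these into an ITSM ticket or a pager in practice
```

The same filter expressed as a Splunk search or a Lambda trigger is what turns "someone made the bucket public" from a news story into a same-day ticket.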

Whether you are exploring the public cloud, have been in it for months or years, or have robust practical experience with the enterprise public cloud model, AHEAD can help. We offer a wide range of cloud and security consulting services: from Public Cloud Health Checks to Enterprise Public Cloud Build-outs with Tagging, Governance, and Consumption Strategies, as well as an AWS Public Cloud Managed Service with these strategies built around automation.

AHEAD believes that in any enterprise's Cloud Delivery Framework (CDF), security is a foundational element: part of the base design and ongoing maintenance, and a valuable resource for auditors and compliance experts.

Architecting well in the beginning matters.   

Measuring, monitoring, and enforcing compliance matters.

Implementing systems to notify of change or risk, as well as the business processes to audit and respond… matters.

All of these are areas of focus that AHEAD is currently wrapping into everything we do with Cloud, whether in the data center or in a shared-risk model with a service provider. We are already actively designing, implementing, maintaining, and enforcing the CIS (Center for Internet Security) Foundations Benchmark with customers at all stages of their cloud evolution today.

If you'd like to learn more, need help identifying areas of risk, or are looking for ways to improve visibility into your security compliance posture – let us know.

Subscribe to the AHEAD i/o Newsletter