AWS Security Specialist Certifications Notes
-
CloudWatch Events delivers alerts in near real time, whereas CloudTrail events can take several minutes to be delivered.
-
Understand Systems Manager
-
Systems Manager provides Parameter Store, which can be used to manage secrets (hint: Parameter Store is cheaper than Secrets Manager for storing secrets if usage is limited)
-
Systems Manager provides agent-based and agentless modes. (hint: agentless mode does not track processes)
-
Systems Manager Patch Manager helps select and deploy operating system and software patches automatically across large groups of EC2 or on-premises instances
-
-
S3 Glacier Vault Lock helps you to easily deploy and enforce compliance controls for individual S3 Glacier vaults with a Vault Lock policy. You can specify controls such as “write once read many” (WORM) in a Vault Lock policy and lock the policy from future edits.
-
If the KMS key policy allows the EC2 role to encrypt/decrypt, then even if the IAM role attached to the EC2 instance has no explicit allow on the key, the instance can still use the key since they are in the same account. CHECK: if a role/user doesn't have an explicit allow on KMS but the key policy allows the access, can that IAM principal use the key?
-
DynamoDB and S3 use a VPC gateway endpoint instead of a VPC interface endpoint to communicate privately with resources inside a VPC. An interface endpoint is powered by PrivateLink and uses an elastic network interface (ENI) as an entry point for traffic destined to the service. A gateway endpoint serves as a target for a route in your route table for traffic destined for the service.
-
Lost private SSH key of an EC2 instance. Solution: Amazon EC2 stores the public key on your instance and you store the private key. Detach the root volume from the EC2, attach it to another EC2 as a data volume, edit the authorized_keys file to add a newly created SSH public key, and attach the volume back to the original EC2 (see the sketch below).
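A minimal sketch of those recovery steps on the rescue instance, assuming the detached root volume shows up as /dev/xvdf1 and the OS user is ec2-user (both the device name and the user are assumptions, adjust to your setup):
# on the rescue instance, after attaching the old root volume as a data volume
sudo mkdir -p /mnt/rescue
sudo mount /dev/xvdf1 /mnt/rescue
# append the newly generated public key to the old instance's authorized_keys
cat new-key.pub | sudo tee -a /mnt/rescue/home/ec2-user/.ssh/authorized_keys
sudo umount /mnt/rescue
# detach the volume, re-attach it to the original instance as its root device (/dev/xvda), then start the instance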
-
Cognito user pool: a user pool has individual profiles which you can manage to allow users to SSO via Facebook, Google, a SAML identity provider, etc.
-
Organization CloudTrail : To create an organization trail in the CloudTrail console, you must sign in to the console using an IAM user or role in the management account with sufficient permissions. If you are not signed in with the management account, you will not see the option to apply a trail to an organization when you create or edit a trail in the CloudTrail console.
-
VPC Peering: VPC peering can connect VPCs in same account, in different accounts and in different regions also.
-
Event Bridge vs CW: Amazon EventBridge is the preferred way to manage your events. CloudWatch Events and EventBridge are the same underlying service and API, but EventBridge provides more features. Changes you make in either CloudWatch or EventBridge will appear in each console.
-
Real Time Analysis: Perform Near Real-time Analytics on Streaming Data with Amazon Kinesis and Amazon Elasticsearch Service
-
KMS Key rotation: Imported key materials are not auto rotated
-
Lambda@Edge: Lambda@Edge provides the ability to execute a Lambda function at an Amazon CloudFront Edge Location. This capability enables intelligent processing of HTTP requests at locations that are close (for the purposes of latency) to your customers. To get started, you simply upload your code (Lambda function written in Node.js) and pick one of the CloudFront behaviors associated with your distribution.
-
Instance metadata has the following info about the EC2: AMI id, hostname, IAM, instance type, MAC, profile, public keys, security groups. To get this info, log in to the EC2 and run the command below:
-
curl http://169.254.169.254/latest/meta-data/
-
We can push a one-time SSH key into the instance metadata, which is more secure. This is how EC2 Instance Connect works with EC2.
-
Users who have shell access to an EC2 instance can run the command below to fetch the access key, secret access key and temporary token of the instance role; these can then be exploited until the temporary credentials expire:
$ curl 169.254.169.254/latest/meta-data/iam/security-credentials/my_Instance_IAM_role
This returns the access key, secret access key, temporary token and token expiry. Hence this needs to be blocked for non-root users using iptables.
-
useradd developer
sudo su -
curl 169.254.169.254/latest/meta-data/iam/security-credentials/first-iam-role
IPTABLES:
iptables --append OUTPUT --proto tcp --destination 169.254.169.254 --match owner ! --uid-owner root --jump REJECT
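Another mitigation (in addition to the iptables rule above) is to require IMDSv2, which makes every metadata call session-based; the instance id below is a placeholder:
# enforce token-based (IMDSv2) access to the metadata service
aws ec2 modify-instance-metadata-options --instance-id i-0123456789abcdef0 --http-tokens required --http-endpoint enabled
# with IMDSv2, callers must first fetch a token and present it on every request
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/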
-
It's not recommended to hardcode credentials using aws configure on EC2 instances; instead, associate an IAM role with the instance with the appropriate permissions and delete any hardcoded credentials. After the IAM role is added and the credentials are removed, we can still make CLI calls from the EC2 even though nothing is present in aws configure, because the CLI falls back to the instance's IAM role credentials instead.
-
Policy Variable
Policy variables allow us to create a more generalized and dynamic policy which takes some values at run time. For example: "arn:aws:iam::888913816489:user/${aws:username}"
The above resource ARN resolves based on the IAM username making the request, so this policy can be attached to many different users and we won't have to mention a unique ARN every time (see the example policy below).
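A minimal example of such a policy, assuming we only want each user to manage their own access keys (the specific actions chosen here are illustrative):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["iam:CreateAccessKey", "iam:DeleteAccessKey", "iam:ListAccessKeys"],
      "Resource": "arn:aws:iam::888913816489:user/${aws:username}"
    }
  ]
}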
NotPrincipal: when we use NotPrincipal with effect "Deny", it explicitly denies the actions to all principals except the users/principals listed in NotPrincipal.
Lambda + KMS
Securing environment variables https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html
Lambda encrypts environment variables with a key that it creates in your account (an AWS managed customer master key (CMK)). Use of this key is free. You can also choose to provide your own key for Lambda to use instead of the default key.
When you provide the key, only users in your account with access to the key can view or manage environment variables on the function. Your organization might also have internal or external requirements to manage keys that are used for encryption and to control when they’re rotated.
To use a customer managed CMK
1. Open the Functions page on the Lambda console.
2. Choose a function.
3. Choose Configuration, then choose Environment variables.
4. Under Environment variables, choose Edit.
5. Expand Encryption configuration.
6. Choose Use a customer master key.
7. Choose your customer managed CMK.
8. Choose Save.
Customer managed CMKs incur standard AWS KMS charges.
No AWS KMS permissions are required for your user or the function’s execution role to use the default encryption key. To use a customer managed CMK, you need permission to use the key. Lambda uses your permissions to create a grant on the key. This allows Lambda to use it for encryption.
This is because the KMS key policy of an AWS managed key has condition key which only allows that particular service to use the kms key.
————————————————————————————-
Ex:
{
  "Version": "2012-10-17",
  "Id": "auto-elasticfilesystem-1",
  "Statement": [
    {
      "Sid": "Allow access to EFS for all principals in the account that are authorized to use EFS",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:CreateGrant",
        "kms:DescribeKey"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "kms:CallerAccount": "770573759869",
          "kms:ViaService": "elasticfilesystem.us-east-1.amazonaws.com"
        }
      }
    },
    {
      "Sid": "Allow direct access to key metadata to the account",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::770573759869:root"
      },
      "Action": [
        "kms:Describe*",
        "kms:Get*",
        "kms:List*",
        "kms:RevokeGrant"
      ],
      "Resource": "*"
    }
  ]
}
– kms:ListAliases – To view keys in the Lambda console.
– kms:CreateGrant, kms:Encrypt – To configure a customer managed CMK on a function.
– kms:Decrypt – To view and manage environment variables that are encrypted with a customer managed CMK.
You can get these permissions from your user account or from a key’s resource-based permissions policy. ListAliases is provided by the managed policies for Lambda. Key policies grant the remaining permissions to users in the Key users group.
Users without Decrypt permissions can still manage functions, but they can’t view environment variables or manage them in the Lambda console. To prevent a user from viewing environment variables, add a statement to the user’s permissions that denies access to the default key, a customer managed key, or all keys.
AWS Cloud Watch Agent Use Cases
AWS Cloudwatch logs service has the capability to store custom logs and process metrics generated from your application instances. Here are some example use cases for custom logs and metrics
-
Web server (Nginx, Apache, etc.) access or error logs can be pushed to CloudWatch Logs, which acts as central log management for your applications running on AWS
-
Custom application logs (java, python, etc) can be pushed to cloudwatch and you can setup custom dashboards and alerts based on log patterns.
-
Ec2 instance metrics/custom system metrics/ app metrics can be pushed to cloudwatch.
Ephemeral ports
The example network ACL in the preceding section uses an ephemeral port range of 32768-65535. However, you might want to use a different range for your network ACLs depending on the type of client that you’re using or with which you’re communicating.
The client that initiates the request chooses the ephemeral port range. The range varies depending on the client’s operating system.
-
Many Linux kernels (including the Amazon Linux kernel) use ports 32768-61000.
-
Requests originating from Elastic Load Balancing use ports 1024-65535.
-
Windows operating systems through Windows Server 2003 use ports 1025-5000.
-
Windows Server 2008 and later versions use ports 49152-65535.
-
A NAT gateway uses ports 1024-65535.
-
AWS Lambda functions use ports 1024-65535.
For example, if a request comes into a web server in your VPC from a Windows 10 client on the internet, your network ACL must have an outbound rule to enable traffic destined for ports 49152-65535.
If an instance in your VPC is the client initiating a request, your network ACL must have an inbound rule to enable traffic destined for the ephemeral ports specific to the type of instance (Amazon Linux, Windows Server 2008, and so on).
In practice, to cover the different types of clients that might initiate traffic to public-facing instances in your VPC, you can open ephemeral ports 1024-65535. However, you can also add rules to the ACL to deny traffic on any malicious ports within that range. Ensure that you place the deny rules earlier in the table than the allow rules that open the wide range of ephemeral ports.
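A sketch of adding such an outbound ephemeral-port allow rule with the CLI; the NACL id and rule number are assumptions:
aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 --egress --rule-number 140 --protocol tcp --port-range From=1024,To=65535 --cidr-block 0.0.0.0/0 --rule-action allow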
S3 bucket ownership
-
If an object is uploaded from a different account, then the uploader is the owner of the object and the bucket owner will not have access to it.
-
To give the bucket owner access to an object uploaded from a different account, the uploader needs to set an object ACL such as the bucket-owner-full-control canned ACL, so that the bucket owner gets full control over the uploaded object as well (see the example below).
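A minimal example of such an upload from the other account, granting the bucket owner full control via the canned ACL (bucket and file names are placeholders):
aws s3 cp report.csv s3://shared-bucket/report.csv --acl bucket-owner-full-control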
Security group vs NACL
-
A network access control list (ACL) allows or denies specific inbound or outbound traffic at the subnet level.
-
If NACL rule 100 allows all traffic and NACL rule 101 denies a particular IP, rule 100 is evaluated first because it has the lower number, so the traffic is allowed and the explicit deny never takes effect. Hence deny rules should always have lower numbers so they are evaluated first.
A role/user should not have permission to view its own policies; if the credentials get compromised, the attacker can use that permission to enumerate which services it has access to and attack those.
Pacu permissions enumeration: it is like Metasploit for the AWS cloud
https://www.youtube.com/watch?v=bDyBAFX4V7M. : BHack 2020 – Step by step AWS Cloud Hacking – EN
SSM
SSM Agent is preinstalled, by default, on Amazon Machine Images (AMIs) for macOS. You don’t need to install SSM Agent on an Amazon Elastic Compute Cloud (Amazon EC2) instance for macOS unless you have uninstalled it.
https://docs.aws.amazon.com/systems-manager/latest/userguide/install-ssm-agent-macos.html
SSM agent comes pre-installed on most amazon linux ami. Just check status by:
sudo systemctl status amazon-ssm-agent
https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-status-and-restart.html
We can run shell commands on the Ec2 instances on which the SSM agent is running:
Go to SSM -> select 'Run Command' from the navigation panel and click the 'Run Command' button -> choose platform 'Linux' in the 'command document' filter and then choose "AWS-RunShellScript".
Generate an SSH key using:
ssh-keygen -t rsa -N "" -q -f feb2022
Then scroll down to "Command parameters" and paste the generated SSH public key with the command:
“#!/bin/bash”, “/bin/echo -e “ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDqTZn4XIVvEcjaJv2GsUZ8aqH+eGlPQTIqnnhPS6yPAXOuuz5wnlQxq+GcNr9y+EzuDaigdzTTqAuSLc7LhN0C9P5psCytcX3kRa62m/o4i2wg7kvmXq7frmBre41lh47Dymy2ia8Qax0v7TP21hd5GWGg8N/zFxD7zD3cEj3D5Ic1qfXPbvappe+kNMQ7wvlfOHTbIpNlmzTJqj0pc9eTD02WROB/cyTyECQKdS8CZMg1ePSAnVvWy7gCDt8LPKBxIu4uKfwLhy7QK3ZwC1gSIy/yLvjv04uzd5r05hqj8+9GY7qlWyUUImS2m1OaPdn8LH1/QBlJWDbRIEL6PLZTIE0XRHZAWBf7WZKmFS4Eo0M6Jwpkc8Rq7zDgvC/xNES5ZQ4UOT+z8vgJ6Fc/KZAg8oGoZjg5bF9cq4GEP/V0congFsyd7T7g1UGv4oU3Yzer/vFJJWXiYLhWcoT4zO6pAm1vjlz5D9Y/60FghPo5yvX4KhvRK5nYrwKXeM/klN0= jaimish@JAIMISH-M-C62F” >> /home/ec2-user/.ssh/authorized_keys”, “”
Using AWS Policy Simulator
Use this link to select users/groups/roles from one panel, then choose the service on which you want to test access. Search for that service and, upon selecting it, it shows all the possible API calls that can be made. Select the API calls you want to test and click "Run Simulation". It shows whether you would get access denied or not.
AWS Access Analyser
We can use IAM Access Analyser just by enabling it. After we create an analyser and enable it, it shows findings about resources (mostly resources that have a resource-based policy).
A finding explains whether too many permissions are given to any of these resources and what those permissions are. We can use this data to make the policy stricter, since the finding shows how exposed that policy makes the resource.
After the issue behind a finding is fixed, the finding moves to Resolved.
AWS WAF
We can use WAF to safeguard resources such as API Gateway. We have to create a Web ACL and in it select the resource for which we are creating the ACL. Then we choose a rule, which can come preconfigured or which we can create ourselves. This rule protects the server, so if a DDoS starts, the rule returns a Forbidden response to the attacker.
-
Firewalls usually operate at layer 3 or layer 4 of the OSI model, i.e. the network and transport layers. Because of this they don't inspect HTTP payloads, leaving applications running at layer 7 vulnerable to attacks such as SQL injection and cross-site scripting. These are layer 7 attacks, so something is needed to safeguard layer 7 against them; we do this using WAF rules.
-
Rules are created against http request since we want protection against layer 7 requests
-
Multiple rules can be combined using logic such as OR and AND. For example: "if" the traffic is from "China" AND "if" the IP belongs to a "blacklist", then "block" the traffic.
WebACL: Container of all the things + default actions
Rule Statement : if “country”==”China”. Statements like this with conditions
Rules: contain multiple rule statements. There are 2 types of rules:
1. Regular rule (e.g. if the request comes from China, block it)
2. Rate-based rule (if too many requests arrive in a short time, block the traffic)
Association: This defines to which entity WAF can be associated to defend it. Ex: WAF cannot be directly associated to EC2.
WAF works with
-
ALB
-
CloudFront Distribution
-
API Gateway
There are preconfigured “AWS Managed Rule” which we can directly use on WAF such as ‘SQL Injection Protection’. This allows us to easily setup WAF without us creating our own rules.
Captcha: We can add “Captcha” which would ask the user accessing the ALB to authenticate/solve the captcha first (or some other resource instead of ALB)
We can also configure a custom response for a rule, which is shown to whoever is trying to access the URL (see the association example below).
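A sketch of associating an existing web ACL with an ALB from the CLI (both ARNs are placeholders); for CloudFront, the web ACL is set on the distribution itself instead:
aws wafv2 associate-web-acl \
  --web-acl-arn arn:aws:wafv2:us-east-1:111122223333:regional/webacl/my-web-acl/11111111-2222-3333-4444-555555555555 \
  --resource-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/50dc6c495c0c9188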
questions for this assignment
How will you ensure that you have an appropriate incident response strategy?
What AWS tools should you use to prepare in advance for incident handling?
How will you ensure sensitive information is wiped post investigation?
What Cloud Security Best Practices are you following today?
While the concerns and issues vary widely from company to company and industry to industry, each business must be able to answer these three key questions. For the most critical application in your enterprise, Do you know
– Who has access to which applications and when?
– How do you monitor for key file changes at OS and application level?
– How will you be notified in a timely manner when something anomalous occurs?
Use below PDF for that security course materials and resources
DNS Zone Walking
DNS zone walking basically enables you to fetch the DNS record information of a domain. It will retrieve the information from the dns zone and give info of IP, SOA etc
AWS Security Hub
AWS security hub fetches findings from:
-
AWS GuardDuty
-
AWS Inspector
-
AWS Macie
-
AWS IAm access analyser
-
AWS Firewall Manager
Only findings generated after you enable Security Hub appear in the console, not older ones that existed before (check for the latest info on this)
For compliance info, AWS Config is used
It acts as a compliance dashboard and shows all this info in a much more understandable form.
-
Summary: Security Hub can generate its own findings by running automated checks against certain compliance standards or rules, using info from its sources. It uses Config rules to check for compliance, so for compliance benchmarks, enable Config.
-
Instead of navigating from dashboard to dashboard, we get all the security-related details under a single dashboard.
-
Security Standards: shows scores against the CIS benchmark. Shows rules such as "Avoid use of root user" and marks them as failed if non-compliant. Gives results on an individual rule-by-rule basis.
-
Findings: shows findings from the sources, mostly from Inspector.
-
Integrations: AWS Security Hub integrates with lot of services such as splunk, tenable, GuardDuty etc
-
It is a regional service
AWS SSM (System Managers)
Systems Manager is basically used to give remote commands to the agent installed on the EC2 instance. Using this, we can run commands on an EC2 instance if the agent is installed. Some AMIs, such as the default Amazon Linux 2 AMI, have the agent pre-installed.
We can now see filesystem info of the instance along with OS level users and groups.
Fleet Manager: Fleet manager basically shows all the instances that have the ssm agent up and running. If agent and Ec2 instance are running then it would show as “Online”.
Sessions Manager: Session manager allows us to create a session to the ec2 instance.
-
Just select the EC2 and click on 'Start session'; this starts a new session with the EC2 instance and opens up a shell in the browser using the connection established with the EC2.
Run Command: allows us to run commands on EC2 instances. This also helps when a command needs to run on 100+ instances, such as an update; Run Command will execute it on all remote EC2 instances that have the SSM agent (see the example command after this list).
-
We can specify on which instance we want to run the command
-
We can choose to run a shell script on the specified instances
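A sketch of the same thing from the CLI, assuming the target instances are tagged Environment=Prod:
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets "Key=tag:Environment,Values=Prod" \
  --parameters 'commands=["sudo yum -y update"]' \
  --comment "patch all prod instances"
# check the output of a finished command (the command id comes from the previous call)
aws ssm list-command-invocations --command-id <command-id> --details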
Compliance: shows whether the instances with the SSM agent have a compliance issue, such as a missing patch or a missing package, and lists all the available updates that can be applied. It marks such an instance as non-compliant under this feature.
Deploying SSM Agent: the SSM agent does not have permissions on the instance by default, so the instance must be associated with an IAM role which the agent uses to make changes to / fetch data from the instance.
-
It can be installed on Ec2 and onPrem servers as well
Amazon EC2 Instance Connect
Amazon EC2 Instance Connect provides a simple and secure way to connect to your Linux instances using Secure Shell (SSH). With EC2 Instance Connect, you use AWS Identity and Access Management (IAM) policies and principals to control SSH access to your instances, removing the need to share and manage SSH keys. All connection requests using EC2 Instance Connect are logged to AWS CloudTrail so that you can audit connection requests.
You can use EC2 Instance Connect to connect to your instances using the Amazon EC2 console, the EC2 Instance Connect CLI, or the SSH client of your choice.
When you connect to an instance using EC2 Instance Connect, the Instance Connect API pushes a one-time-use SSH public key to the instance metadata where it remains for 60 seconds. An IAM policy attached to your IAM user authorizes your IAM user to push the public key to the instance metadata. The SSH daemon uses AuthorizedKeysCommand and AuthorizedKeysCommandUser, which are configured when Instance Connect is installed, to look up the public key from the instance metadata for authentication, and connects you to the instance.
Overview: EC2 Instance Connect vs Session Manager
- Instance IAM role: EC2 Instance Connect - not required; Session Manager - required
- Security group (port 22): EC2 Instance Connect - required; Session Manager - not required
- Public IP: EC2 Instance Connect - required; Session Manager - not required
EC2 Instance Connect is an offering from the EC2 service and doesn't require any extra setup to connect to an instance. Just have a public IP and port 22 open, then click on 'Connect' and this opens an SSH connection with our EC2 (it can also be driven from the CLI, see the sketch below).
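A sketch of the same flow from the CLI: push a temporary public key (valid for 60 seconds) and then SSH with the matching private key. The instance id, AZ and key file names are assumptions:
aws ec2-instance-connect send-ssh-public-key \
  --instance-id i-0123456789abcdef0 \
  --availability-zone us-east-1a \
  --instance-os-user ec2-user \
  --ssh-public-key file://my_temp_key.pub
ssh -i my_temp_key ec2-user@<public-ip>   # must be used within 60 seconds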
Security Groups Statefulness:
This basically means that if your own instance initiates a connection on a port, you don't need to allow that port specifically for the return traffic coming in.
So just open ports like 22, 443 and 80 in the inbound rules and delete everything on the outbound side; connections on those ports still work even though no outbound rule is present, because return traffic for allowed inbound connections is automatically permitted.
'Session Manager' is an offering from the Systems Manager service and hence requires a few steps: the SSM agent should be installed and an IAM role should be associated with the EC2 instance. Also, Systems Manager 'quick setup' should be run to set up the service, and this may cost extra. The advantage of using Session Manager is that you don't need to open port 22 on the EC2; Session Manager connects to the instance's SSM agent internally, hence it is a preferred way to connect to private EC2 instances without exposing them.
-
We can use EC2 Instance Connect from the EC2 console to SSH into an EC2 directly (rather than using PuTTY or similar).
-
Session Manager also lets you open a shell to your EC2 instance from the Session Manager console.
-
The comparison above shows that an instance IAM role is not required for EC2 Instance Connect from the EC2 console, but Session Manager does require one. EC2 Instance Connect needs the security group (port 22) open and a public IP, whereas Session Manager lets you get a shell on the instance without a public IP or any open inbound port.
-
So basically we can use Session Manager to get a shell on EC2 instances inside a private subnet. Normally a private EC2 instance would require a bastion host to SSH into it, but it is easier and cheaper to use Session Manager; just make sure the SSM agent is installed and running on these EC2 instances. Also remember Session Manager requires an IAM role on the EC2.
Benefits of using sessions manager
-
centralized access control using IAM policies, instead of normal SSH which cannot be centrally controlled since anyone with the key can SSH in.
-
no inbound ports need to be open
-
logging and auditing of session activity. This shows up under "Session History". We can further check the "General Preferences" card of Session Manager, which covers CloudWatch logging. This logging needs to be enabled from CloudWatch. Also remember to edit the EC2 role so that it has permissions on the CloudWatch log group to send the logs there: just edit the role of that EC2 and attach the "CloudWatchLogsFullAccess" IAM policy to it.
-
One-click access to instances from console and CLI
-
No need of VPN to connect to instance
Document feature: Systems Manager has some pre-configured 'common documents' which basically allow us to do some quick deployments.
Ex :
-
AWS-RunAnsiblePlayBook
-
AWS-ConfigureDocker (installs docker on Ec2 remotely)
-
AWS-InstallMissingWindowsUpdates
-
AWS-RunShellScript
-
many more
Patch Manager
It automates the patching process for both security-related and other types of patches. It can identify/scan for the required patches and apply them for us.
Maintenance window: provides a way to schedule the patching of systems. A cron/rate expression helps deploy patches at times of least traffic.
Parameter Store of SSM
-
It is used to centrally store configurations and credentials, or even plain text data
-
We can choose the parameter type 'SecureString', which uses KMS to encrypt our data. While viewing it, use --with-decryption to see the plaintext; otherwise, since it is encrypted, it returns the ciphertext blob (see the example below).
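A minimal example; the parameter name and value are assumptions:
aws ssm put-parameter --name "/prod/db/password" --type SecureString --value "S3cr3tValue"
aws ssm get-parameter --name "/prod/db/password" --with-decryption --query Parameter.Value --output text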
SSM Automation
-
We can automate a lot of tasks using this such as stopping instances, blocking open SG etc
-
These automations also support 'approval', which uses SNS to send an email to the subscriber. Only the IAM user/role mentioned as the approver of the automation is allowed to approve the request; no other user can approve it.
SSM Inventory
-
SSM provides a way to look into EC2 and on-premises configurations of the resource
-
It captures Application names with versions, Files, Network Configurations, instance details, windows registry and others
-
We can query our inventory to find which specific application is installed on which instance. This helps to centrally check whether a particular application (and which version of it) is present on all our EC2 instances or on-prem servers.
-
We can send this info further to S3 and then use athena to query this using resource data sync.
-
We will have to setup the inventory first. We can choose which instance to include, also what to include and what not to include in the inventory and also the frequency of this sync. We can also select S3 sync to sync the data to S3.
CloudWatch Unified Agent
-
It can fetch resource metrics and also collect logs from the host. Logs for EC2 are present in log groups; the metrics inside the CloudWatch dashboard let us see memory, CPU and other metrics.
-
After installing the agent on the EC2, you need to run the 'configuration wizard'. In it you choose the host OS and which type of metric collection you want; it has basic, standard and advanced levels, varying in granularity and in what kinds of metrics are collected. After this configuration is done, the CloudWatch agent is started for the first time (see the commands below).
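On Amazon Linux the wizard and agent start-up typically look like the sketch below (installation paths may differ on other distributions):
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
# load the generated config and start the agent
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json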
AWS EventBridge
-
EventBridge delivers a stream of real-time data from event sources to targets. Ex: Ec2(EC2_Terminate) –>EventBridge—-> SNS(Alert Sent regarding terminated Ec2).
-
Further, we can schedule these alerts by using EventBridge with "Schedule" as the source, so the targets are triggered at a specific time/schedule (see the example rule below).
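A sketch of the terminated-EC2 example as an EventBridge rule with an SNS target (the topic ARN is a placeholder):
aws events put-rule --name ec2-terminated \
  --event-pattern '{"source":["aws.ec2"],"detail-type":["EC2 Instance State-change Notification"],"detail":{"state":["terminated"]}}'
aws events put-targets --rule ec2-terminated \
  --targets 'Id=1,Arn=arn:aws:sns:us-east-1:111122223333:ops-alerts'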
___________________________________________________________________________________________________________________________
Athena
It is used to make SQL like commands on data that is present inside S3 bucket.
-
It can be used to query large datasets, e.g. CloudWatch or CloudTrail logs delivered to S3. We can query such logs directly using Athena (see the example below).
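A sketch of running such a query from the CLI; the table name cloudtrail_logs and the results bucket are assumptions (the table has to be created over the CloudTrail S3 prefix first):
aws athena start-query-execution \
  --query-string "SELECT eventname, count(*) AS calls FROM cloudtrail_logs WHERE errorcode = 'AccessDenied' GROUP BY eventname ORDER BY calls DESC" \
  --query-execution-context Database=default \
  --result-configuration OutputLocation=s3://my-athena-results/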
______________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________
Trusted Advisor
-
It analyses the AWS environment and provides best-practice recommendations in 5 major categories:
-
Cost Optimization: recommendations that allow us to save money, such as "Idle Load Balancers".
-
Performance: shows recommendations that help improve performance, such as "EBS volume causing slowdown"
-
Security: All security related advisories.
-
Fault Tolerance: recommendations that help us make resources more fault tolerant. Ex: checks if all EC2 instances are in different AZs so that if one AZ goes down, the other EC2 instances are not affected.
-
Service Limits: checks service limits and lets us know if any limit has been breached or is about to be breached
-
-
Each category has a number of checks; for example, Security has many more checks such as "IAM Password Policy" and "CloudTrail Logging". Not all of them are enabled by default; to enable the other checks you have to upgrade your support plan. After upgrading the support plan, you get access to all the Trusted Advisor checks.
______________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________
CloudTrail
-
3 Types of EventTypes are logged if selected:
-
1. Management Events: Normal events being performed on our AWS resources
-
2. Data Events: S3 object-level logging such as PutObject, GetObject, DeleteObject. We can select whether we want only read events or write events. These events can be viewed at the S3 bucket or CloudWatch log group level. (They won't show up on the CloudTrail dashboard, so they can only be seen in S3 or a CW log group, or by sending the trail data to Splunk and querying it there.) The preferred way is using CloudWatch log groups.
-
3. Insights Events: identify unusual errors, activity or user behaviour in your account. Ex: 'spike in EC2 instances being launched'.
-
-
CT Log File Validation: this allows us to confirm whether the log files delivered to the S3 bucket have been tampered with. Validation lets us know if there has been any change to the files. It does this using RSA signing and SHA-256 hashing.
-
The validation option comes under the advanced settings of a trail. Once enabled, CloudTrail delivers digest files alongside the logs, and the integrity of the delivered files can be checked against them (see the CLI example below).
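A sketch of running the integrity check from the CLI instead of hashing files by hand (the trail ARN and start time are placeholders):
aws cloudtrail validate-logs \
  --trail-arn arn:aws:cloudtrail:us-east-1:111122223333:trail/my-trail \
  --start-time 2023-01-01T00:00:00Z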
CloudTrail data events
CloudTrail data events (also known as "data plane operations") show the resource operations performed on or within a resource in your AWS account. These operations are often high-volume activities. Data events are activity that doesn't change the configuration of the resource.
Example data events
-
Amazon Simple Storage Service (Amazon S3) object-level API activity (for example, GetObject, DeleteObject, and PutObject API operations)
-
AWS Lambda function invocation activity (for example, InvokeFunction API operations)
-
Amazon DynamoDB object-level API activity on tables (for example, PutItem, DeleteItem, and UpdateItem API operations)
Viewing data events
By default, trails don’t log data events, and data events aren’t viewable in CloudTrail Event history. To activate data event logging, you must explicitly add the supported resources or resource types to a trail.
For instructions to activate data event logging, see Logging data events for trails.
For instructions to view data events, see Getting and viewing your CloudTrail log files.
Note: Additional charges apply for logging data events. For more information, see AWS CloudTrail pricing.
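A sketch of enabling S3 object-level data events on an existing trail (trail and bucket names are assumptions):
aws cloudtrail put-event-selectors --trail-name my-trail \
  --event-selectors '[{"ReadWriteType":"All","IncludeManagementEvents":true,"DataResources":[{"Type":"AWS::S3::Object","Values":["arn:aws:s3:::my-bucket/"]}]}]'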
CloudTrail management events
CloudTrail management events (also known as “control plane operations”) show management operations that are performed on resources in your AWS account. These events are for creating/modifying/deleting etc on a resource.
Example management events
-
Creating an Amazon Simple Storage Service (Amazon S3) bucket
-
Creating and managing AWS Identity and Access Management (IAM) resources
-
Registering devices
-
Configuring routing table rules
-
Setting up logging
________________________________________________________________________________________________________________
AWS Macie
-
AWS Macie uses machine learning to search S3 buckets for PII data, database backups, SSL private keys and various other sensitive data, and creates alerts about them.
-
Checks if buckets are publicly exposed or publicly writable.
-
It has 'Findings' like GuardDuty, such as 'SensitiveData:S3Object/Credentials'; this finding comes up if Macie finds credentials in an S3 bucket.
-
So finding sensitive data is its primary function, but it also checks for open S3 buckets, missing encryption, etc.
-
Custom Data Identifier: we can further create custom regular expressions (regex) to match a certain type of data present in S3; if the data is matched, that finding type comes up. So scanning for sensitive data does not stop at the preconfigured findings; we can create our own regex and get custom findings.
-
Scanning jobs need to be scheduled using a 'scheduled job' for repeated scanning, or you can go ahead with a 'one-time job' for a single scan. You can check the running job inside the 'Jobs' section.
-
These findings can be stored for long time but needs configuration.
________________________________________________________________________________________________________________
S3 Event Notification
-
S3 event notifications allow us to receive a notification when certain events happen in your bucket.
-
Event Types: these are the events you receive alerts for; e.g. 'all object create events' alerts on Put, Post, Copy and multipart upload.
-
Event notifications can then trigger, or send data to, different destinations:
-
lambda
-
SQS
-
SNS
-
-
So basically we can get an SNS alert if someone performs specific actions on the S3 bucket (see the example configuration below).
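A minimal sketch of wiring object-create events to an SNS topic (the bucket name and topic ARN are assumptions; the topic policy must allow S3 to publish to it):
aws s3api put-bucket-notification-configuration --bucket my-bucket \
  --notification-configuration '{"TopicConfigurations":[{"TopicArn":"arn:aws:sns:us-east-1:111122223333:s3-alerts","Events":["s3:ObjectCreated:*"]}]}'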
________________________________________________________________________________________________________________
VPC FlowLogs
A VPC flow log is like a visitors' register: it keeps a record of who visited the AWS environment. VPC flow logs basically capture traffic at the network interface level.
VPC Flow logs is a feature that enables you to capture information about the IP traffic going to and from ‘network interfaces’ in your VPC.
VPC Flow logs capture:
-
Record traffic information that is visiting the resource (eg EC2). Ex: If Ip 192.12.3.3 is visiting my Ec2, then it would record this detail about the IP. ie 192.12.3.3 —-> Ec2 instance
-
Records data if resource connected to any outbound endpoint. Ex: EC2—>193.13.12.2
VPC flow log dashboard helps us setup dashboard to get idea as to from where or which country the traffic is coming in.
All the information of VPC flow logs is stored in CloudWatch log groups which gets created automatically 15 mins after VPC flow logs are enabled.
Destination of VPC Flow Logs can be :
-
CloudWatch Logs Group :By default, once setup, log streams are stored in log groups
-
S3 bucket : We can optionally push these logs into S3
-
Send to Kinesis Firehose same account
-
Send to Kinesis Firehose Different account
Sending to Kinesis From CW logs:
We need to mention the destination Kinesis Firehose stream arn.
We can also create custom log format to get these logs.
Basically all the info present in stealthwatchCLoud ie SWC/SCA is on VPC flow logs since it has destination IP, source IP, accept/reject traffic, port etc
VPC flow logs capture interface-level logs and are not real time. We can choose to capture interface-level flow logs of a single instance, but it is better to log the complete VPC.
We can choose a particular network interface from network interface dashboard. Now select flow logs of that network interface, we can setup a new one. When we individually go and enable flow logs for one network interface at a time, this would be interface level flow logs. Its just vpc flow log but for a particular network interface.
It's not as real time as CloudWatch and is often 10-15 minutes late. (See below for a CLI example of enabling VPC flow logs.)
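A sketch of enabling flow logs for a whole VPC with CloudWatch Logs as the destination (the VPC id, log group and IAM role are assumptions):
aws ec2 create-flow-logs --resource-type VPC --resource-ids vpc-0abc12345678def90 \
  --traffic-type ALL --log-destination-type cloud-watch-logs \
  --log-group-name my-vpc-flow-logs \
  --deliver-logs-permission-arn arn:aws:iam::111122223333:role/FlowLogsDeliveryRole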
Log Format of VPC Flow log
-
version: VPC flow logs version
-
account-id : AWS account id
-
interface-id : network interface id
-
srcaddr: the source IP address
-
dstaddr: the destination IP address for outgoing traffic
-
srcport: the source port
-
dstport: the destination port
-
protocol: The protocol number
-
packets: number of packets transferred
-
bytes: number of bytes transferred
-
start: start time in unix seconds
-
end: end time in unix seconds
-
action: accept or reject traffic
-
log-status: the logging status of the flow log
Ex of a vpc flow log (cross check with above format):
2 123456789010 eni-1235b8ca123456789 172.31.16.139 172.31.16.21 20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK
Data From Sources(ex CW log groups) Subscription filter—> Kinesis Data Stream —> kinesis Firehose —> Kinesis Data Analytics/DataSplunk/S3 etc
Or
(new)
Directly From VPC flow logs —> Kinesis Data Stream (for storage)—> kinesis Firehose(for processing data) —> Kinesis Data Analytics/DataSplunk/S3 etc (for analysing the data)
Earlier VPC flow log destination used to show just S3 and CW log groups while setting up VPC flow logs. And then from CW logs, using subscription filter, we used to push logs into Kinesis Stream. However now we can directly send VPC flow logs to Kinesis Stream.
Ask what is the advantage of directly sending vs first collecting data into CW log group and then sending.
Which is costly
Should we prefer sending to central CW log group if we are logging from multiple accounts?
________________________________________________________________________________________________________________
AWS Kinesis (Streaming)
Kinesis is basically used for real time data storage, processing and visualizations.
It has 3 main parts:
-
Kinesis Stream: Kinesis stream is place for Data collection and storage. So all the logs or any data that needs to be processed and sent, first needs to be put into Kinesis Stream. All data is first collected here. Kinesis stream can receive data directly From VPC flow logs and also from CW log groups.
-
Kinesis Firehose: Firehose is the 2nd stage, used to process the data and deliver it to destinations. Just as water is first stored in tanks and a pipe with a tap delivers it, the data is stored in Kinesis Streams and Firehose is responsible for processing the data (e.g. compressing or filtering it) and pushing it to the destination; hence this is the 2nd stage.
-
Kinesis Analytics: This is optional 3rd stage. Usually Firehose will deliver data into Splunk HEC but we can also choose to send the data into Kinesis analytics for analysis if we want.
Data sent from different sources --> goes into Kinesis Stream for storage --> Kinesis Stream pushes it into Firehose for processing and delivery --> Firehose delivers the data to Splunk HEC, or to S3 buckets for backup, or to Kinesis Analytics
-
Multiple sensors could be sending continuous data; this data can be stored and processed using Kinesis.
-
This requires 2 layers:
-
Storage Layer
-
Processing Layer
-
-
3 entities are involved in this process:
-
Producers: Data producers like sensors
-
StreamStore: Where data is gonna be stored like Kinesis
-
Consumer: This will run analytics on top of the data and give useful info
-
-
Kinesis Agent: This is a software application that is used to send data to Kinesis stream. Other ways could be done by using AWS SDK.
-
Consumers: We can Select “Process Data in real Time” under the kinesis stream dashboard. This will make kinesis as consumer and show some processed data. Other option could be ‘Kinesis Data Firehose’ which means using firehose delivery stream to process and store data.
-
After sending data to kinesis, it shares a shardID which could be used to refer to the data and fetch it.
4 Kinesis Features:
-
Kinesis Data Stream : Captures, process and store data streams in real-time
-
Kinesis Data Firehose: primarily used to move data from point A to point B. Hence Firehose can fetch data from a Kinesis Data Stream and send it to Splunk. When setting it up, we get the option to choose 'Source' as the Kinesis stream and 'Destination' as Splunk or S3 (S3 can also be chosen as a backup destination for logs).
-
Kinesis Data Analytics: Analyze streaming data in real-time with SQL/Java code. So we can use SQL to make commands and analyse data here.
-
Kinesis Video Stream: capture, process and store video streams
Best centralized logging solution using CW log group (not necessary for VPC flow logs, since they can be sent directly into the stream) + Kinesis Stream + Kinesis Firehose + Splunk + S3
Use CW log groups for central VPC flow logs: if each VPC sends its logs individually into the stream, it might cost more and doesn't seem centralised. Instead we can push VPC flow logs from different accounts into a central CW log group, this log group can push the logs into a Kinesis stream, and the stream then sends the data into Firehose.
We can setup a centralized logging solution using below component for CW logs and VPC flow logs(also sent in CW logs):
For centralized logging of CloudTrail logs, Configuration is very simple and directly configured to send data to a central S3 bucket, no other thing is needed. But this is not the same for CloudWatch Logs or for VPC Flow logs.
Usually we have to set up Splunk so that it can reach each account's CloudWatch Logs and fetch the appropriate logs. However, as a better deployment, we can send all the CloudWatch logs from different accounts to a central location and then set up Splunk to fetch logs from a single Kinesis Data Stream / Firehose. Hence Splunk would not need to reach into different accounts to get this data; it can fetch all the data from one Kinesis stream which is in turn getting data from different accounts.
Centralized logging with Kinesis Firehose + Splunk HEC + s3
-
usually we dump data into S3 and then use Splunk HEC to fetch the data from that S3 bucket
-
However, We can leverage Kinesis Firehose here. So instead of sending CW logs directly to S3, we can rather set CloudWatch logs as source for firehose and setup a stream for firehose with source as CW logs.
-
Now Firehose gives a lot of options to play around with these logs, such as we can choose a lambda function to process the logs, change it, compress it, change its format etc. This can be done while setting up the stream for Firehose.
-
We can also Select S3 as backup. If we select s3 as backup, then firehose will also send data to S3 and then also send data to Splunk HEC
-
Splunk HEC would be selected as destination and all the logs will be pushed to splunk from Firehose.
https://www.youtube.com/watch?v=dTMlg5hY6Mk
(these accounts are sending CW logs)
Account A ————>
|———-> CWLogs destination<—>Kinesis Stream<–> Kinesis FireHose <—–>Splunk (also send a backup copy to S3)
Account B ————>
So like all the legacy accounts can send data to secOps account and use kinesis in SecOps account to send data to splunk.
The CloudWatch central destination cannot be created by GUI, can be done by CLI though. APi call is ‘put-destination’.
++https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CreateDestination.html
Steps to do above deployment
-
Create a Kinesis data stream in the receiving account.
-
Create an IAM role that allows CloudWatch Logs to send data to the Kinesis data stream. This role will be assumed by CloudWatch Logs to put data into the Kinesis stream.
-
Now use the CLI to create the CloudWatch Logs destination which will receive logs from the different accounts. This destination lives in the receiving central account where Kinesis is set up. Use the 'put-destination' CLI command and mention the Kinesis stream ARN in it; this connects the CW Logs destination to Kinesis so that the destination can put data into the stream. The destination needs permission to assume the role created above, and that role's permissions policy must allow the kinesis:PutRecord API call. CloudWatch Logs will assume this role and then call kinesis:PutRecord to send data into Kinesis, so remember to add the PutRecord permission to the IAM policy of the role being assumed.
-
Let's say we want to send CloudTrail data to Splunk. In this step set up a CloudTrail trail and configure it to deliver events to a CloudWatch log group. Or you can choose to configure VPC flow logs and send them to a particular CW log group.
-
Run the put-subscription-filter CLI command in the sender account so that logs from that log group are sent to the CW Logs destination.
Steps:
########## Recipent Account ###############
Receipent Account ID: [FILL-HERE]
Sender Account ID: [FILL-HERE]
1) Create Kinesis Stream
2) Create IAM Role to allow CloudWatch to put data into Kinesis
Trust Relationship
{
  "Statement": {
    "Effect": "Allow",
    "Principal": { "Service": "logs.region.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }
}
IAM Policy:
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "kinesis:PutRecord",
      "Resource": "arn:aws:kinesis:region:999999999999:stream/RecipientStream"
    },
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::999999999999:role/CWLtoKinesisRole"
    }
  ]
}
3) Create CloudWatch Log Destination
aws logs put-destination \
  --destination-name "testDestination" \
  --target-arn "arn:aws:kinesis:ap-southeast-1:037742531108:stream/kplabs-demo-data-stream" \
  --role-arn "arn:aws:iam::037742531108:role/DemoCWKinesis"
Output:
{
  "destination": {
    "targetArn": "arn:aws:kinesis:ap-southeast-1:037742531108:stream/kplabs-demo-data-stream",
    "roleArn": "arn:aws:iam::037742531108:role/DemoCWKinesis",
    "creationTime": 1548059004252,
    "arn": "arn:aws:logs:ap-southeast-1:037742531108:destination:testDestination",
    "destinationName": "testDestination"
  }
}
4) Associate Policy to the CloudWatch Log Destination
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "585831649909"
      },
      "Action": "logs:PutSubscriptionFilter",
      "Resource": "arn:aws:logs:ap-southeast-1:037742531108:destination:testDestination"
    }
  ]
}
aws logs put-destination-policy \
  --destination-name "testDestination" \
  --access-policy file://DestinationPolicy
######## Sender Account ###########
aws logs put-subscription-filter \
  --log-group-name "CloudTrail/DefaultLogGroup" \
  --filter-name "RecipientStream" \
  --filter-pattern "{$.userIdentity.type = Root}" \
  --destination-arn "arn:aws:logs:ap-southeast-1:037742531108:destination:testDestination"
Reference:
aws logs put-subscription-filter --log-group-name "CloudTrail/DefaultLogGroup" --filter-name "RecipientStream" --filter-pattern "{$.userIdentity.type = Root}" --destination-arn "arn:aws:logs:ap-southeast-1:037742531108:destination:testDestination" --profile aws2 --region ap-southeast-1
Kinesis Commands:
aws kinesis get-shard-iterator --stream-name arn:aws:kinesis:ap-southeast-1:037742531108:stream/kplabs-demo-data-stream --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON
aws kinesis get-records --shard-iterator "AAAAAAAAAAFLZtIq6kmA2juoKilfVONKUHYXRENk+9CSC/PAK3gBb9SAFYd5YzxMxisA/NLwMhOaj6WZVE6qzf0/d0op7Qk6M66317w4EKWiKLUbP/S2VWjvLk36Sn+KWmOVFmEzze7RE+whtaUIJuDgJsKmnNHDw8u1q28Ox6kj79Iplq3Mg1Chjv7zlv9uGeBeMaUkrmi/NAdJmQuLUGgnvtluu7KDJ6T1JP3M5GqwlO3HwK3gog=="
________________________________________________________________________________________________________________
Bastion Host
SSH agent forwarding: this allows users to use their local SSH keys to perform operations on remote servers without the keys ever leaving the workstation. This is the more secure deployment, since keeping SSH keys on the bastion risks the keys in case the bastion host is compromised.
Client Workstation ----> Bastion EC2 (Public Subnet) ----> Private EC2
Normal Way (not so secure):
-
Create ssh keys on your instance (client) using ssh-keygen command. This would create 2 keys, one public and one private
-
Now cat the public key on the client and copy the contents to the bastion host. Basically we are copying the public key from the client to the bastion: append it to the bastion's ~/.ssh/authorized_keys file.
-
Do the same with the private EC2 instance, i.e. paste your public key into its ~/.ssh/authorized_keys. Now all 3 instances have the same public key, but only the client has the private key.
-
Now that all instances have the public key, we could copy the private key from the client to the bastion host and use it there to SSH to the private EC2, but storing private keys on bastion hosts is not safe, hence we go with the safer approach.
Safer way using SSH :
-
Use SSH agent forwarding instead of pasting private keys on the bastion host. Use the steps below:
-
Step 1: Create Public/Private Key in Remote Client EC2
-
ssh-keygen
-
Step 2: Setup Authentication
-
Copy the contents of the public key from the remote-client to the ~/.ssh/authorized_keys file of both the Bastion and the Private EC2.
-
Step 3: Use SSH Agent Forwarding
-
Run the following commands on the remote-client EC2 instance
-
exec ssh-agent bash        # or: eval $(ssh-agent -s)
ssh-add -k ~/.ssh/id_rsa   # add the private key to the agent
ssh-add -L                 # list the loaded keys to confirm
-
Step 4: Test the Setup
-
From remote-client EC2, run the following command:
-
ssh -A [BASTION-EC2-IP]
-
Once you are logged into Bastion, try to connect to Private EC2
-
ssh [IP-OF-PRIVATE-EC2]
-
Use Systems Manager to get a shell on the private EC2 (very secure). Systems Manager now also works with on-prem resources, hence hybrid deployments are also supported. So if the company has on-prem instances, use SSM (Systems Manager) Session Manager to access those on-prem resources.
VPN (Virtual Private Network)
-
works similar to proxy
-
In AWS, VPN is deployed on public subnet and can be used to connect to private subnet resource
Issues with VPN software on EC2 instance :
-
what if vpn fails
-
patch management
-
upgradation
-
vpn server configuration is a hectic job
-
performance optimization
Solution::: AWS Client VPN
-
dont need to bother about above mentioned issues
-
only configuration needs to be done and it is not that complicated.
AWS Client VPN
-
We can use AWS Client VPN service to create our own VPN server to which we can connect from our EC2 instance and route internet safely
-
First certificates are generated since client has to present certificate to VPN server for ‘mutual Authentication’. SAML authentication is also possible.
-
Once we have certificate on our Ec2 instance which need VPN connection, now we can setup the VPN-Client service.
-
Go to the Client VPN service and set up the VPN. It asks for the client CIDR, which is basically a CIDR range from which an IP is given to our client by the VPN service. This IP is what keeps our EC2 anonymous, since the VPN does not show the real IP of the EC2 to anyone.
-
Once the client CIDR and 'mutual authentication' are set up, download the OpenVPN software on the EC2 instance; this software needs a .ovpn file. This file is downloaded from AWS after the Client VPN endpoint is configured. Download the file, append the certificate and key to it, and load it into the OpenVPN software on your EC2.
-
Now click on connect and openVPN software would connect to our own VPN server and provide anonymous access to internet.
-
Please make sure to update the Client VPN endpoint's route table so that 0.0.0.0/0 is routable, for internet access.
-
Also, after VPN client is created, it shows waiting association, so you need to use the ‘Associations’ tab to add the subnet to which our Ec2 instance belongs. All the subnets mentioned under this would be able to use the VPNClient for anonymous access.
-
In the 'authorization' step, authorization basically decides which clients have access to the VPC. So set up the authorization rules accordingly.
-
The 'Connections' card shows details about the connections, like data transferred, active/disconnected, etc.
Steps:
Step 1: Generate Certificates:
sudo yum -y install git
git clone https://github.com/OpenVPN/easy-rsa.git
cd easy-rsa/easyrsa3
./easyrsa init-pki
./easyrsa build-ca nopass
./easyrsa build-server-full server nopass
./easyrsa build-client-full client1.kplabs.internal nopass
Step 2: Copy Certificates to Central Folder:
mkdir ~/custom_folder/
cp pki/ca.crt ~/custom_folder/
cp pki/issued/server.crt ~/custom_folder/
cp pki/private/server.key ~/custom_folder/
cp pki/issued/client1.kplabs.internal.crt ~/custom_folder
cp pki/private/client1.kplabs.internal.key ~/custom_folder/
cd ~/custom_folder/
Step 3: Upload Certificate to ACM:
aws acm import-certificate --certificate fileb://server.crt --private-key fileb://server.key --certificate-chain fileb://ca.crt --region ap-southeast-1
Step 4: Copy the Certificates to Laptop:
scp ec2-user@IP:/home/ec2-user/custom_folder/client* .
Step 5: Central Client Configuration File:
Following contents to be added to central ovpn configuration file –
<cert>
Contents of client certificate (.crt) file
</cert>
<key>
Contents of private key (.key) file
</key>
Step 6: Prepend the DNS Name.
EC2/Resouce –>uses O
_______________________________________________________________________________
Site to Site VPN Tunnel (S2S)
-
Challenge: high availability is an issue in this case, since if the connecting endpoint goes down, everything goes down. Hence multiple tunnels are used instead of one.
-
Its like alternative to vpc peering.
Importance of Virtual Private gateway (VPG)
-
Virtual Private gateway has built-in high availability for VPN connections. AWS automatically creates 2 highly available endpoints , each in different AZ. These both act like a single connection but since there are 2 IPs(tunnels), it makes it more available architecture. This is how high availability is achieved from AWS side.
VPC Peering
VPC peering enables two different private subnets in different VPCs to communicate with each other. So if EC2 A is in a private subnet and EC2 B is in a different VPC's private subnet, normally they would need to route the traffic via the internet, but with VPC peering, resources in one VPC can communicate with resources in a different VPC
without routing traffic via internet.
Inter region VPC Peering is now supported
VPC peering does not work the same as transit VPC
VPC Peering also connects one AWS account to another AWS account. So basically we can use VPC peering to connect 2 different VPC in 2 different AWS accounts.
IMp:
-
VPC peering will not work with overlapping CIDR ranges. Ex: 172.16.0.0/16 --X-- cannot peer --X-- 172.16.0.0/16
-
Transitive routing is not possible, which means that if VPC A and VPC B are each peered with VPC C, then A<-->C and B<-->C work, but you cannot reach B via C from A, i.e. A<-->C<-->B is not possible, so A<-->B is not possible without a direct peering.
Egress Only IGW
-
Provides the same use case as a NAT gateway, which doesn't allow traffic from outside to initiate a connection with the resources behind it.
-
This is used for IPV6 mainly.
VPC Endpoints
-
Usually when an EC2 instance wants to connect to a service such as S3, it needs to route the traffic via the internet gateway; with a VPC endpoint the EC2-to-S3 connection becomes private and traffic doesn't route via the internet.
Issues with routing traffic via internet:
-
High data transfer cost via internet
-
Security issues
-
High latency
-
Can bottleneck the internet gateway
Reading Route table
Destination: where your traffic wants to reach
Target: how you reach that destination, i.e. via which device/route
Ex:
A.
Public subnet
Destination: 0.0.0.0/0
target: igw-3979037b
This means that to reach the mentioned destination, you route the traffic via the target, the internet gateway in this case.
B.
Private Subnet
Destination: 172.31.0.0/16
Target: local
This means for this particular private subnet, you would need to route traffic locally inside vpc to reach to the mentioned CIDR.
In the case of a gateway VPC endpoint, the route has the service's prefix list as the destination and the endpoint as the target, e.g.:
Destination: pl-13edqd12 (the prefix list of the service, such as the S3 prefix list; this represents the service's CIDR ranges)
Target: vpce-afbqfq3u
So the destination is the service's address range that will receive the traffic, and the target is the device through which the traffic reaches that service.
3 Types of Endpoints
-
Gateway Endpoints: Supported by S3 and DynamoDb
-
Interface Endpoints: almost all other services except S3 and DynamoDB support interface endpoints.
-
Gateway Load Balancer Endpoints
Using EC2 role, we can make direct CLI commands from the EC2 which would use that Ec2 role instead of the configured credentials.
VPC Endpoint Policies
When we create an endpoint to access a service such as S3, we can attach a 'VPC endpoint policy' which makes sure that access to the resources reached via the endpoint is restricted. Otherwise, using a VPC endpoint with the default policy leaves all S3 buckets (and more) accessible through the endpoint without restriction, which we don't want.
Hence using an endpoint policy we can specify which resources should be accessible and which should not, e.g. Bucket_A is accessible and Bucket_B is not (see the example policy below).
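A minimal example endpoint policy restricting an S3 gateway endpoint to a single bucket (bucket-a stands in for Bucket_A above):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::bucket-a", "arn:aws:s3:::bucket-a/*"]
    }
  ]
}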
Interface Endpoint
An interface endpoint is a network interface with a private IP that serves as an entry point for traffic destined to a supported AWS service or to a service exposed via PrivateLink.
It uses a network interface together with an endpoint policy. The network interface sits inside the subnet and gets an IP from that subnet.
To use it, create the interface endpoint in your subnet(s); EC2 instances in the VPC can then reach the service privately through the endpoint's private IP/DNS instead of going over the internet.
NACLs and Security Groups
Security groups are attached to ENIs (elastic network interfaces), so if a service has an ENI, a security group can be associated with it. There are a lot of services that use EC2 indirectly and hence SGs can be applied to them as well, such as:
-
Amazon EC2 instances
-
Services that launch EC2 instances:
-
AWS Elastic Beanstalk
-
Amazon Elastic MapReduce
-
-
Services that use EC2 instances (without appearing directly in the EC2 service):
-
Amazon RDS (Relational Database Service)
-
Amazon Redshift
-
Amazon ElastiCache
-
Amazon CloudSearch
-
-
Elastic Load Balancing
-
Lambda
For creating outbound rules in NACLs, make sure to allow the ephemeral port that the requester will be using. Ex: if an external server sends a connection request to our EC2 on port 22, the request also mentions the requester's own port, to which the data from my EC2 must be returned. Since this requester's port is usually random, while creating the outbound rule make sure to allow/deny the requester's port range. To be safe, outbound rules usually open ports 1024-65535, since that covers the ephemeral ports a requester might use. For the inbound rule, the rule is made as usual with the service port such as 22 or 443.
My Ec2 (open port 22)<—————–> requester’s pc (open all ports since this server might be requesting connection of a random port such as 12312)
Stateful Firewall:
In this type of firewall, e.g. a Security Group, you don't need to create a return rule.
A stateful firewall remembers the state, i.e. if a connection was initiated from outside on port 22, the firewall remembers the request and allows the return traffic for that connection without an explicit rule. In the case of a stateless firewall, the firewall does not remember the state and hence we need to open the port for the return traffic as well, like in NACLs where we have to create both inbound and outbound rules.
destination Ec2 <——————- Client
192.12.1.1:22 (client requesting connection on port 22 to our EC2)
——————–>
173.12.54.5:51077 (the client asks our EC2 to send the return traffic to port 51077, so make sure the NACL has this ephemeral port range open; it is not needed explicitly for security groups since they are stateful and remember the connection)
We don't really know which port the client will use for the return traffic, hence the outbound rule opens the ephemeral port range in the case of NACLs.
In the case of a stateful firewall, i.e. a security group, if we only have an inbound rule opening port 22 and no outbound rules, the return traffic is still allowed out, since the firewall, being stateful, remembers the connection and does not need an outbound rule to send the reply. Hence even if there is no outbound rule on the security group, an inbound rule alone is enough for the connection to work.
Inbound rules are the rules that apply when traffic comes into our EC2, hence inbound is for connections initiated by a client.
Outbound rules: these are for connections initiated by the EC2 itself.
If I only create an inbound rule opening port 22 on my EC2 and delete all outbound rules, I can still SSH to the EC2 from my laptop even though there is no outbound rule, since the SG is stateful and doesn't need a separate outbound rule to return the traffic.
However, if I now try to ping google.com from the SSH'd EC2, it fails, since there is no outbound rule, and outbound rules are what allow connections initiated by my EC2; hence the ping fails if no outbound rule is present in the SG.
So if your server only receives SSH connections from outside and never initiates a connection itself, you can delete all outbound rules in the SG so that the EC2 cannot initiate a connection on its own.
This is why the default outbound rule of an SG is open to all: we trust connections initiated from within the EC2 and want to accept all the return traffic for connections the EC2 initiated.
So inbound and outbound rules are about who initiated the traffic. If an external server initiates the traffic, the SG inbound rules kick in; if the EC2 itself initiates the connection, the outbound rules kick in, and they act independently of each other.
Stateless Firewall:
Ex: NACL
In this case, both inbound and outbound rules need to be created.
In the case of a stateless firewall, make sure both inbound and outbound rules exist for a complete connection; a sketch of the corresponding NACL entries follows.
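A hedged CLI sketch of the NACL rules described above (the NACL ID is a placeholder): inbound allows SSH, outbound allows the ephemeral range for the return traffic.
# inbound: allow SSH from anywhere
$ aws ec2 create-network-acl-entry --network-acl-id acl-0a1b2c3d --ingress --rule-number 100 --protocol tcp --port-range From=22,To=22 --cidr-block 0.0.0.0/0 --rule-action allow
# outbound: allow the ephemeral ports used for return traffic
$ aws ec2 create-network-acl-entry --network-acl-id acl-0a1b2c3d --egress --rule-number 100 --protocol tcp --port-range From=1024,To=65535 --cidr-block 0.0.0.0/0 --rule-action allow
With a security group, only the inbound port 22 rule would be required because the return traffic is tracked automatically.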
CDF- CloudFront distribution
-
Uses edge locations to deliver data faster via caching
-
We can setup rules to block traffic based on geoLocation for our CDN that we have created, restricting access to our content for some countries.
-
It can integrate with WAF and Shield for protection
-
Need to set 'Restrict viewer access' to 'Yes' and select 'Trusted key groups'. This makes all the content accessible only via CloudFront signed URLs; no content can be accessed without a signed URL.
CloudFront Origin Access Identity (OAI: this restricts access to CloudFront's link only, with no direct access via S3)
This ensures that only users accessing the content via the CDN are able to access it, while direct access to the resource such as S3 is not possible.
While setting up the CDN for S3, the option to select an OAI comes up; tick that option to make sure your website is not accessible directly via S3 and is accessible only via the CDN.
Remember to select 'Yes, update the bucket policy', which implements a bucket policy so that only the CDN can access the bucket and nothing else.
Basically, when we have data on S3 that has to be shared publicly, making the bucket public is not safe. Rather than making the bucket public to the world and giving everyone our S3 bucket DNS, we can use OAI: we give out the CDN's link instead, and only the CDN is allowed to make GetObject API calls against the bucket. This way our bucket doesn't get random requests from all around the world; it only gets requests from our CDN, and the CDN can manage the security.
This is done via updating the bucket policy which only allows CDN to read S3 data and no one else:
{
  "Version": "2012-10-17",
  "Statement": {
    "Sid": "AllowCloudFrontServicePrincipalReadOnly",
    "Effect": "Allow",
    "Principal": {
      "Service": "cloudfront.amazonaws.com"
    },
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*",
    "Condition": {
      "StringEquals": {
        "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"
      }
    }
  }
}
CloudFront Signed URLs (signed URLs are for the special case where you want to send a unique link to fetch the data)
Signed URLs are special URLs generated for the CDN. These URLs are not the same as the domain name given when we create the CDN. If signed URLs are enforced, a normal access request via the CDN's URL gets access denied mentioning 'missing key-pair'. This is because once we have mandated the use of signed URLs, non-signed URLs are rejected.
Basically this is the way to generate unique links for customers and allow them a time-limited download, like the links websites generate these days that let us download content for some time before they expire; that is an example of signed URLs, which come with expiries.
This is enabled by ticking 'Restrict viewer access', which makes sure that users can only access the content using the signed URL/cookies.
Now to generate the signed URL use this CLI command :
$ aws cloudfront sign --url "https://ajksqeui.cloudfront.net/admin.txt" --key-pair-id ABJKIQAEJKBFCB --private-key file://pk-ASKJVFAVFAUO.pem --date-less-than 2022-12-11
This will return a very big and complicated URL which could be pasted in the browser and hit enter to access the content.
Now this command associates the key pair with the URL, and when the content is accessed with this URL we won't get rejected, since this time the URL is signed with the keys.
Ex: let's say someone used your website to buy a song; now you want only that person to be able to access the song, so you will create a signed URL for that song's URL, e.g.:
$ aws cloudfront sign --url "https://songtresures/downloads/terebina.mp3" --key-pair-id ABJKIQAEJKBFCB --private-key file://pk-ASKJVFAVFAUO.pem --date-less-than 2022-12-11
This returns a signed URL which we can then mail to our customer, allowing only that paid customer to use a unique link to download the content.
To include signed URLs in your CDN, while deploying the distribution make sure to set 'Restrict viewer access' to 'Yes' and select 'Trusted key groups'.
The keys need to be generated from the CloudFront dashboard to use them for signing the link. After this, navigate to key groups and add the key to the key group.
Field Level Encryption
This encryption is provided by CloudFront and enables us to encrypt only a specific field/part of the data instead of the entire data. Ex: from credit card details, only encrypt the card number while the other info can stay non-encrypted.
______________________________________________________________________________________________________________________
AWS Shield (defence against DDoS attacks)
-
Shield Standard
-
Shield Advance
Shield Advance provides :
-
defence against sophisticated DDOs attacks
-
near real-time visibility into the ongoing attack
-
24*7 access to AWS DDOS Response team (DRT) during ongoing attacks
-
Bills at $3000/month per org
-
Cost protection: If during the attack, the infrastructure scaled, then AWS will reimburse the amount in form of credits. This is called “Cost Protection”
-
Layer 3/4 attack notification and attack forensic reports and attack historical report (layer 3/4/7)
-
AWS WAF is included at no extra cost
DDOS Mitigation:
-
Be ready to scale so that service scales up in case of attack and server doesn’t breakdown. Autoscaling can help.
-
Minimize the attack surface area ie decouple the infra like database being in private and frontend being public subnet
-
Know what traffic is normal and what is non-normal.
-
Be ready for a plan in case of DDOS attack
_____________________________________________________________________________________________________________________
API Gateway
-
We can quickly create an API using api gateway and select a method like GET and integrate it with a lambda as trigger.
-
Now we can use the invoke URL given by the API gateway service, enter it in browser and hit enter. This will send a GET request to our api which would further trigger the lambda.
-
Since we can select REST API also, we will also get a response sent by lambda to the api.
______________________________________________________________________________________________________________________
S3 and Lambda
-
We can setup our lambda and in the trigger, we get a choice to directly use S3 as a trigger.
-
This way of selecting S3 as a direct trigger for the Lambda does not require SNS or SQS; S3 acts as the direct trigger for the Lambda.
___________________________________________________________________________________________________________________________
SSH Key Pairs
-
The public key of the key pair that we select while deploying the EC2 is stored in the '~/.ssh/authorized_keys' file
-
If we delete the key pair from AWS, that key still remains inside the EC2 and needs to be deleted manually
-
If we create an EC2 from an existing AMI, the key present in the AMI will be populated inside the new EC2
___________________________________________________________________________________________________________________________
Ec2 Tenancy
-
Shared: the EC2 instance runs on shared hardware. Issue: security
-
Dedicated Instance: the EC2 instance runs on hardware which is shared only between resources of the same AWS account. Hence 1 account - 1 hardware, so all EC2 instances running on our host will be ours.
-
Sometimes licenses are tied to hardware, such as Oracle licenses, so we would not want our hardware to change after a stop/start; this is where Dedicated Hosts come in, which let us keep the same hardware.
-
-
Dedicated Hosts: the instance runs on a dedicated host with a very granular level of hardware access
___________________________________________________________________________________________________________________________
AWS Artifacts (A document portal for compliance)
-
The Artifact portal provides on-demand access to AWS security and compliance docs, also known as artifacts.
-
This is required during audits when auditors ask for proof that AWS services are compliant with PCI DSS, HIPAA, etc.
___________________________________________________________________________________________________________________________
-
Lambda@Edge allows us to run lambda functions at the edge location of the CDN allowing us to do some processing/filtering of the data being delivered by the CDN.
-
There are 4 different points in the request/response flow where a Lambda can run: Viewer Request -> Origin Request -> Origin Response -> Viewer Response (the CloudFront cache sits between the viewer and origin events). At any of these 4 points we can attach a Lambda.
___________________________________________________________________________________________________________________________
DNS Attributes of VPC
-
'enableDnsHostnames' allows our EC2 instances to get DNS hostnames; this is enabled by default in the default VPC.
-
'enableDnsSupport' enables DNS resolution through the Amazon-provided name servers. If 'enableDnsHostnames' is enabled but 'enableDnsSupport' is disabled, the instance may still get a DNS name but the resolution will not happen; both must be enabled for a public DNS name to resolve.
___________________________________________________________________________________________________________________________
DNS Query Logging
-
R53 query logging enables us to log the DNS queries made for our domains. Query logging can be enabled inside our hosted zone settings. When a client makes a request to resolve a record via R53, the logs contain the below fields:
-
Domain requested
-
Timestamp
-
Record Type
-
response sent etc
-
-
Destination of the logs: CW log group, S3 or kinesis.
-
For this the ‘query logging’ has to be enabled
Step Functions
-
These are sets of Lambda functions (steps) which depend upon one another, orchestrated as a state machine.
-
So if we want lambda2 to run only after lambda1, we use a Step Function state machine that first runs lambda1 and then executes lambda2, as in the sketch below.
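A minimal Amazon States Language sketch of that two-step chain (the Lambda ARNs and state names are placeholders):
{
  "StartAt": "Lambda1",
  "States": {
    "Lambda1": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:111122223333:function:lambda1",
      "Next": "Lambda2"
    },
    "Lambda2": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:111122223333:function:lambda2",
      "End": true
    }
  }
}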
AWS Network Firewall (The main firewall For VPC, unlike WAF which protects ALB, CDN, APi only)
AWS Network Firewall is a stateful, managed firewall by AWS which provides intrusion detection and prevention for our VPCs.
AWS Network Firewall works together with AWS Firewall Manager so you can build policies based on AWS Network Firewall rules and then centrally apply those policies across your VPCs and accounts.
AWS Network Firewall includes features that provide protections from common network threats. AWS Network Firewall’s stateful firewall can:
-
This is the main firewall which protects all our resources inside a VPC. Right now we can only create NACLs to secure our subnets, SGs work at the network interface level, and WAF only protects ALB, CloudFront (CDN) and API GW. Network Firewall is the actual firewall that protects the VPC from malicious traffic.
-
Filtering and Protocol Identification : Incorporate context from traffic flows, like tracking connections and protocol identification, to enforce policies such as preventing your VPCs from accessing domains using an unauthorized protocol. We can also block based on TCP flags.
-
Active traffic Inspection: AWS Network Firewall’s intrusion prevention system (IPS) provides active traffic flow inspection so you can identify and block vulnerability exploits using signature-based detection.
-
Blocking Domains: AWS Network Firewall also offers web filtering that can stop traffic to known bad URLs and monitor fully qualified domain names.
-
Stateful and Stateless IP/Domain Filtering: we can do both stateless and stateful IP filtering. We can also upload a bad-domain list so that no resource in our VPC connects to those domains. Both stateless and stateful rules are supported.
-
VPC Level: Since this service associates firewall at VPC level, we can make a uniform deployment of this service such that all VPC gets uniform security. It also integrates with AWS Firewall Manager service which works at Org level. This will help in streamlining the firewall controls over VPC throughout the org. So if some IPs and domains needs to be blocked at Org level, use AWS Network firewall and associate the rule to all the VPCs. This uniform deployment of firewall cannot be achieved with any other service which could block IPs and domains both.
-
Sits between resource subnet and its Internet Gateway: After you create your firewall, you insert its firewall endpoint into your Amazon Virtual Private Cloud network traffic flow, in between your internet gateway and your customer subnet. You create routing for the firewall endpoint so that it forwards traffic between the internet gateway and your subnet. Then, you update the route tables for your internet gateway and your subnet, to send traffic to the firewall endpoint instead of to each other.
Earlier:
IGW <——> Subnets <——> AWS resources
Earlier the traffic routed directly between the subnets (inside which the resources are present) and the IGW. Now:
IGW <——> Network Firewall <——> Subnets <——> AWS resources
The Network Firewall sits between the subnets and the IGW, and we amend the route tables of the IGW and the subnet so that instead of exchanging traffic directly, traffic from the subnet goes through the firewall first, and from the firewall it goes to the IGW.
Normally if we have to block some IPs from certain VPCs, we have to use Splunk to first ingest the VPC flow logs and then cross match those logs with Bad IP list. This can be easily done using Network Firewall since it allows us to upload a list including domains and IPs that needs to be restricted.
High availability and automated scaling
AWS Network Firewall offers built-in redundancies to ensure all traffic is consistently inspected and monitored. AWS Network Firewall offers a Service Level Agreement with an uptime commitment of 99.99%. AWS Network Firewall enables you to automatically scale your firewall capacity up or down based on the traffic load to maintain steady, predictable performance to minimize costs.
Stateful firewall
The stateful firewall takes into account the context of traffic flows for more granular policy enforcement, such as dropping packets based on the source address or protocol type. The match criteria for this stateful firewall is the same as AWS Network Firewall’s stateless inspection capabilities, with the addition of a match setting for traffic direction. AWS Network Firewall’s flexible rule engine gives you the ability to write thousands of firewall rules based on source/destination IP, source/destination port, and protocol. AWS Network Firewall will filter common protocols without any port specification, not just TCP/UDP traffic filtering.
Web filtering
AWS Network Firewall supports inbound and outbound web filtering for unencrypted web traffic. For encrypted web traffic, Server Name Indication (SNI) is used for blocking access to specific sites. SNI is an extension to Transport Layer Security (TLS) that remains unencrypted in the traffic flow and indicates the destination hostname a client is attempting to access over HTTPS. In addition, AWS Network Firewall can filter fully qualified domain names (FQDN).
Intrusion prevention
AWS Network Firewall’s intrusion prevention system (IPS) provides active traffic flow inspection with real-time network and application layer protections against vulnerability exploits and brute force attacks. Its signature-based detection engine matches network traffic patterns to known threat signatures based on attributes such as byte sequences or packet anomalies.
Alert and flow logs
Alert logs are rule specific and provide additional data regarding the rule that was triggered and the particular session that triggered it. Flow logs provide state information about all traffic flows that pass through the firewall, with one line per direction. AWS Network Firewall flow logs can be natively stored in Amazon S3, Amazon Kinesis, and Amazon CloudWatch.
Central management and visibility
AWS Firewall Manager is a security management service that enables you to centrally deploy and manage security policies across your applications, VPCs, and accounts in AWS Organizations. AWS Firewall Manager can organize AWS Network Firewall rules groups into policies that you can deploy across your infrastructure to help you scale enforcement in a consistent, hierarchical manner. AWS Firewall Manager provides an aggregated view of policy compliance across accounts and automates the remediation process. As new accounts, resources, and network components are created, Firewall Manager makes it easy to bring them into compliance by enforcing a common set of firewall policies.
Rule management and customization
AWS Network Firewall enables customers to run Suricata-compatible rules sourced internally, from in-house custom rule development or externally, from third party vendors or open source platforms.
Diverse ecosystem of partner integrations
AWS Network Firewall integrates with AWS Partners for integration with central third-party policy orchestration and exporting logs to analytics solutions.
Traffic Mirroring
AWS allows us to mirror the packets being sent on a network interface and send them to a location we can use for investigating the network traffic and the content of those packets.
Traffic Mirroring is an Amazon VPC feature that you can use to copy network traffic from an elastic network interface of type 'interface'. You can then send the traffic to out-of-band security and monitoring appliances for:
-
Content inspection
-
Threat monitoring
-
Troubleshooting
The security and monitoring appliances can be deployed as individual instances, or as a fleet of instances behind either a Network Load Balancer with a UDP listener or a Gateway Load Balancer endpoint. Traffic Mirroring supports filters and packet truncation, so that you only extract the traffic of interest using the monitoring tools of your choice.
EBS encryption: There is no direct way to encrypt existing unencrypted EBS volumes or snapshots, you can encrypt them by creating a new volume or snapshot.
AWS Organizations
AWS Organizations allows us to manage multiple AWS accounts: it can be used to create new accounts, provides single consolidated billing, and the all-features mode lets you leverage some services and their features at the org level, such as org-wide CloudTrail trails. Different accounts can be put into different OUs (Organizational Units). Policies applied on a parent OU are inherited by child OUs.
SCP: Org also provides SCPs, which can be used to deny certain actions at account level. Even the root user of a member account is affected by an SCP and gets a deny on actions the SCP denies; an example SCP is sketched below.
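A hedged example of a small SCP (the actions chosen here are just an illustration) that prevents member accounts from tampering with CloudTrail:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyCloudTrailTampering",
      "Effect": "Deny",
      "Action": [
        "cloudtrail:StopLogging",
        "cloudtrail:DeleteTrail"
      ],
      "Resource": "*"
    }
  ]
}
Attached to an OU, this denies the listed actions for every account (including their root users) under that OU, regardless of IAM permissions.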
Tag Policies: Allows us to standardize the tags on our resources. We can create a policy here which would mandate the tag and the format of the tag can also be defined.
AI opt out: Use this policy to opt out from AWS AI services storing our data for ML related tasks and optimizations.
Backup Policies: We can use this helps maintains consistency as to how are backups managed in our org.
Org has following 2 types:
-
All Features:
-
Consolidated Billing: Management account pays for all child accounts
Resource Based Policy Vs Identity Based Policy
Resource policy: These are directly attached to a resource instead of being attached to IAM user or role. ex: S3 bucket policy, SQS Access policy etc
Identity Based Policy: these are directly attached to an IAM user/role.
How is access decided if the resource has a resource-based policy and the user also has an identity policy?
Ans. If the resource and the user both have policies, then the union of all the actions allowed in the resource policy and the identity policy is allowed.
It is not required that both policies allow the action for it to work. If either the resource policy or the identity policy allows it, then the user is allowed to make the API call.
Ex:
Below is an example in which there are 3 users and their policies are mentioned, also there is a s3 bucket whose bucket policy is mentioned below.
S3 bucket policy of Bucket A:
Alice: Can list, read, write
bob: Can read
John: Full access
Identity based user policies:
Alice: Can read and write on Bucket A
Bob: Can read and write on Bucket A
John: No policy attached
Result when these users try to access the Bucket A:
Alice: Can list, read and write
Bob: Can read and write
John: Full access
So as we can observe, it doesn’t matter which policy is giving access, both policy at the end are added together to confirm the access and the combined policy is then used to evaluate the access to the resource
So
resource policy + identity policy = full permission policy that the user will get
this is for same-account access; for cross-account access, both policies must allow the action.
-
Objects that are uploaded from a different AWS account to a central S3 bucket in the main account, those objects might not be accessible by the users of the central AWS account due to object ownership issues. The object ownership needs to be defined so that the users/roles in the AWS account which owns the central bucket are also able to access the objects of the bucket.
Practicals:
-
A user ‘temp’ created with no IAM policy.
-
A new key created, in the key policy, first the user was not mentioned.
-
The user's encrypt and decrypt API calls both fail since none of the policies gives him access
-
Then 'encrypt' permission was added to the IAM user policy while the KMS key policy stayed the same with no mention of the temp user; this time encrypt worked but not decrypt
-
Now, decrypt permission is added only to the KMS key policy and not to the temp user's IAM policy. Now both encrypt and decrypt work.
-
This means, as mentioned above, that when AWS evaluates permissions it sums up the permissions from the IAM policy and the KMS key policy, and the net access is what both give together. So if encrypt is in the IAM policy and decrypt is in the KMS key policy, the user gets the summation of both policies as permissions, so it is able to make both encrypt and decrypt calls.
-
This can be a bit risky, since even if the KMS key policy mentions nothing about the IAM user, the temp user can gain complete access to the key just by adding the permissions to its own IAM policy.
-
One way to get rid of this is to remove the "Sid": "Enable IAM User Permissions" statement, or to add a specific user ARN instead of "AWS": "arn:aws:iam::456841097289:root", since the root principal allows all users and roles in the account to access the key via their own IAM permissions.
So the statement below in the KMS key policy allows IAM users to use their IAM policies to gain access to the KMS key even though the key policy itself might not give them permissions.
{
  "Sid": "Enable IAM User Permissions",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::456841097289:root"
  },
  "Action": "kms:*",
  "Resource": "*"
},
If you don't want IAM users to use their own policies to access the key, and want the key to be accessible only when the KMS key policy allows it, then remove the above segment or at least mention a specific IAM user/role ARN whose IAM policy you want to be considered.
Not mentioning root makes sure that no one is able to access the key if the KMS key policy itself does not allow it. But keep in mind that this might make the key unmanageable, and you might need to contact AWS Support to regain access.
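For illustration, a scoped-down version of that statement assuming a hypothetical admin user named kms-admin (replace the ARN with the principal you actually want to delegate to):
{
  "Sid": "Enable IAM User Permissions",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::456841097289:user/kms-admin"
  },
  "Action": "kms:*",
  "Resource": "*"
}
Only kms-admin can now combine its IAM policy with the key policy; other principals need an explicit allow in the key policy itself.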
External ID
-
An external ID can be used to make cross account role assumption secure.
-
In the trust policy of the role being assumed (in the destination account), the Condition section of that trust policy would have:
"Condition": {
  "StringEquals": {
    "sts:ExternalId": "128361"
  }
}
Hence while assuming the role, make sure to mention this external ID, else the role assumption fails due to no external ID being provided.
$ aws sts assume-role --role-arn arn:aws:iam::0123801380:role/developerRole --external-id 128361 --role-session-name temp
The above command succeeds in assuming the developerRole since it provides the external ID.
-
While setting up the role, we get an 'Options' section which gives the options 'Require external ID' and 'Require MFA'; tick 'Require external ID' on the 'Create role' page.
AWS STS
-
STS service is responsible to fetch temporary tokens to give temp access.
-
STS provides Access Key, Secret Access Key and Session Token and this can be used to get gain access.
AWS Federation and IDP: here
-
Microsoft Active Directory: Allows us to store users at a single location. Just go to AD and click on new-> new user, this creates a new user as part of the AD
-
This AD can then be associated to our AWS SSO to give these users access.
-
Federation allows external Identities to have secure access in your AWS account without having to create IAM users
-
These federated external identities can come from:
-
Corporate Identity Provider ex: AD, IPA
-
Web Iden Provider. Ex: Fb, Google, Cognito, OpenID etc
-
Identity Broker
The identity broker is the intermediary that first lets users log in with their own creds to the identity provider. The IdP (identity provider) then cross-checks the details with Microsoft AD; if the creds match, the IdP accepts the login.
Now that the IdP has confirmed the user, it reaches out to AWS STS to fetch access keys and a token for the session. These details are fetched by the IdP and forwarded to the user trying to authenticate; this is called the auth response. As soon as these keys and tokens are received by the user as the auth response, the user is automatically redirected to the AWS console using those creds; this is 'sign in with auth response token'.
-
So it's like a Cisco employee wearing a Cisco badge goes to an Amazon office and wants to enter Amazon
-
The Cisco employee shows his Cisco ID card and asks the guard to let him into Amazon's building
-
Since the ID card is not from Amazon, Amazon's guard sends his employee to verify with Cisco whether I am really a Cisco employee
-
Cisco confirms: yes, I am a Cisco employee trying to get into Amazon's office for some quick work.
-
Now that it is confirmed that I am Cisco's employee, Amazon's guard creates a temp ID card for me by asking permission from the main Amazon building which I need to enter
-
Now I can enter Amazon's building using temporary credentials. This is how federated login works. If we want to log in to Amazon using Google's creds, Amazon's console first checks with Google to verify my identity and then gives me temporary credentials which I can use to explore Amazon. The guard who does all the verifications and gives me the temp ID card is the identity provider. The identity store would be Cisco's office where my actual identity is stored.
Identities: The users
Identity broker: Middlerware that takes users from point A and help connect them to point B
Identity Store: Place where users are present. Ex: AD, IPA, FB etc
Identity Provider: An example of identity provider that we can add to AWS ‘Identity Provider’ in IAM is Okta. Okta is very famous identity provider. okta would provider us the metadata and SAML data to help login.
steps:
-
users login with username and password
-
this cred are given to Identity Provider
-
Identity Provider validates it against the AD
-
If credentials are valid, broker will contact the STS token service
-
STS will share the following 4 thing
Access Key + Secret Key + Token + Duration
-
Users can now use to login to AWS console or CLI
AWS SSO
AWS SSO allows admins to centrally manage access to multiple AWS accounts and business applications and provide users with single sign-on access to all their assigned accounts and applications from one place.
So all the users need to do is signIN into AWS SSO and from here they can be redirected to a page with different applications such as AWS console, Jenkins, Azure etc to select from, once we selects the application, we are redirected to the selected app.
AWS CLI also integrates with AWS SSO, so we can use CLI to login using SSO.
In case of SSO, there are 3 parties:
-
User: This is the user trying to login to AWS
-
Identity Provider: IDP is the 2nd party which stores all the user details, and allows it to gain access over AWS
-
Service: Service is the party on which you are trying to gain access over. IN this case it would be AWS console
Setup Requirements:
-
AWS SSO requires Org enabled
-
AWS SSO can be integrates with IAM users and groups. Also, it can integrate with external user identity pools like Microsoft AD to source the users from.
-
User portal URL would be the url of the portal that users use to enter creds and login
-
Permission sets are used to setup permissions for the users logging in. Users are added to user groups and these permission sets are associated to these groups to make sure the users in group have required permissions.
-
If required, we can then integrate the AWS CLI with SSO. Users will be required to use --profile with their SSO profile when signing in with the CLI, e.g. $ aws s3api list-buckets --profile my-sso-profile (the profile name is just an example); a fuller sketch follows.
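A hedged sketch of the CLI-side setup (the profile name my-sso-profile is a placeholder chosen for this example):
# one-time interactive setup: prompts for the SSO start URL, region, account and permission set
$ aws configure sso
# refresh the SSO session whenever the cached token expires
$ aws sso login --profile my-sso-profile
# then any command can run with the SSO-sourced credentials
$ aws s3 ls --profile my-sso-profile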
SAML Flow
-
The flow starts when the user opens up the IDP URL, it can be Okta for example, and the user enters its credentials and selects the application to which it would gain access.
-
The IDP service will validate the creds at its end, so okta will cross check if creds are correct at okta end, if okta creds are entered correctly, okta will send back a SAML assertion as a SAML response
-
User now uses this SAML response to redirect to the SAAS signin page and Service provider ie AWS will validate those assertion.
-
On validation, AWS will construct relevant temp creds, and construct a user signIn temp link which is then sent to the user and the user is redirected to AWS console.
Active Directory
-
Active Directory is like a central server to store Users in a single location.
-
AD makes it easy to just manage users in a single location
-
Multiple other services can use AD to fetch user information instead of having to manage them on each app
-
AWS has its own called ‘AWS Directory service’
AWS Directory Service
-
This is AWS offering for Active Directory. It gives us 5 ways to setup or connect active directory to AWS
-
AWS Managed Microsoft AD
-
Simple AD
-
AD Connector
-
Amazon Cognito User Pool
-
Cloud Directory
-
-
AWS Managed Microsoft AD : Actual Microsoft AD that is deployed in Cloud. It has 2 options:
-
Standard Edition : Upto 500 users
-
Enterprise Edition: Large Org upto 5000 users
-
-
AD Connector: AD Connector works like a middleman/bridge service that connects your on-prem AD directory to your AWS cloud. So when a user logs in to the application, AD Connector redirects the sign-in request to the on-prem AD for authentication.
-
Simple AD: this is a standalone, AD-compatible directory for AWS Directory Service which is similar to Microsoft AD but not the same; it's like a lighter alternative to Microsoft AD. It provides authentication, user management and much more, and AWS also provides backups and daily snapshots of it. It does not support MFA. It's like a free/cheaper version of AD. We can store usernames/passwords here and let engineers SSH into EC2 instances using their AD creds.
Trust In Active Directory
Trust between domains allows users to authenticate from the user directory to the service. Trust can be of 2 types:
-
One-way trust: In this the direction of trust matters and access can be gained only in one direction.
-
Two-Way trust: In this domains from either side can access each other.
So let's say we have an on-prem Active Directory and we want to leverage it to log in to AWS. In that case we can set up a one-way trust between the on-prem Active Directory and the AWS directory/AD Connector which is used to connect on-prem AD to AWS. The trust then allows users from the on-prem AD to authenticate via the AWS side and gain access to different AWS services.
S3 Buckets
Permissions on S3 buckets can be controlled by:
-
Identity permissions: For AWS account access restriction. IAM policies associated to IAM users/roles can give access to S3 buckets.
-
Bucket Policy: Bucket policy not only restricts AWS entities, but also caters to the request originating from outside world such as for public S3 bucket. It can make restrictions based on:
-
Allow access from particular IP
-
Allow access from Internet
-
Only allow HTTPS connection
-
Allow access from specific IP etc
-
The access on an S3 bucket is the sum of the bucket policy + the identity policy of the user. So either the bucket policy or the user policy, any one, can give access to an S3 bucket; if both give access, no issues either. So if a user does not have an IAM identity policy, he can still work on the S3 bucket if the bucket policy gives him permission. And if the bucket policy doesn't give the user permission but the user's identity policy gives access, then the bucket is also accessible to the user:
Net Permissions on bucket = Bucket Policy + Identity policy permissions of the user
-
If there is no bucket policy, and block all public access is on, obviously an object such as a pdf on S3 bucket cannot be accessed from internet. It fails with below error:
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>NNRSBY9FR3QKX41G</RequestId><HostId>P0oMA9QGoyr+kj3SnuYd3wlVT6pDw2a95WHuem+zh+Iqk1YSWr11J8ATYbb26V11PHL+x7XvkyE=</HostId></Error>
-
Now if I disable Block All Public Access and the bucket still does not have any bucket policy, the object is still not accessible and I get the same error as above while trying to access the PDF from the internet, although the 'Access' label on the bucket starts to show 'Bucket can be public'. So with 'Block all public access' off and no bucket policy, the objects still can't be accessed over the internet. This means just turning 'Block public access' off doesn't by itself make the objects public.
-
However, as soon as I add below bucket policy after making ‘block all public access’ to off :
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::triallajbsfljabf/*"
    }
  ]
}
This time the object loaded perfectly because now public access is allowed and finally the bucket policy allows it.
-
Now let's say there is no bucket policy and 'Block all public access' is also off. As seen above, although this shows 'Bucket can be public', if there is no bucket policy like the one above allowing access to "Principal": "*", the objects are still not accessible, as mentioned above.
-
This kind of means that without a bucket policy the objects can't be open to the world, but there is a catch here: object ACLs can also be used to give access to the public.
-
So if there is no bucket policy, that doesn't mean the object cannot be made public. We can use the object ACL to make it readable to the world without having any bucket policy.
-
However, if we go to the object and try to edit the ACL, it does not allow us and shows a message: "This bucket has the bucket owner enforced setting applied for Object Ownership. When bucket owner enforced is applied, use bucket policies to control access."
-
This means, if we want to control the bucket object permissions via ACLs and not bucket policy, in that case first:
-
Go to S3 bucket —-> Permissions
-
Scroll down to "Object Ownership". It mentions "This bucket has the bucket owner enforced setting applied for Object Ownership. When bucket owner enforced is applied, use bucket policies to control access." Click "Edit".
-
Now after clicking edit, you are taken to the “Edit Object Ownership” page where we can enable object ACLs.
-
Here select “ACLs Enabled”. It shows “Objects in this bucket can be owned by other AWS accounts. Access to this bucket and its objects can be specified using ACLs.”. This means we are now allowing ACLs to set permissions on objects. —-> “I acknowledge”—>Choose “Bucket owner preferred” –> Click on “Save Changes”. Now your objects can use ACLs to give permissions. Hence ACLs needs to be enabled first before giving permissions to objects via ACLs.
-
After enabling, now go back to object and select the object.
-
Navigate to “Permissions” of object selected —> This time the “Access Control List” dialog box will have ‘Edit’ enabled. —> Click on “Edit”
-
Finally Choose Everyone (public access) –> Object and check the ‘Read’ option. And save.
-
Now if you try to access the object from internet, it would be accessible without bucket policy.
-
-
Conclusion:
To make objects public in an S3 bucket there are 2 ways; in both of them, Block All Public Access needs to be 'off', since if this is 'on', no matter what we use, the objects will not be public. Hence while triaging whether objects are public, first check whether Block All Public Access is 'on'; if it is, the public is blocked no matter what the object ACL or the bucket policy says. So the first requirement to make objects public is that "Block All Public Access" is 'off'; then come the below 2 ways to give access:
-
Bucket Policy : The bucket policy mentioned above can be applied to the bucket after having ‘Block All Public Access’ to off. Without bucket policy, the objects cant be accessed.
-
Object ACLs: if we are not using a bucket policy and 'Block All Public Access' is off, then we can use object ACLs to make objects public. But this requires 2 things: first set "ACLs enabled" in the bucket's Object Ownership settings, then go to the object, edit its ACL and give "Read" access to Everyone (public access). So to make objects public there are 2 levels of security: first "Block All Public Access" needs to be 'off' at the bucket level, and then the second level is more object/folder specific, where we can use either a bucket policy or an object ACL to give public access (a CLI sketch of the ACL route follows).
So "Block All Public Access" is like the outside door of the house (S3 bucket) that needs to be opened to give access to the inside rooms (objects). The inside rooms can then be opened either via the normal door knobs (bucket policy) or via the latch (object ACL); either of the two can be used to open the room.
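A hedged CLI version of the ACL route described above (bucket and key names are placeholders):
# allow ACLs on the bucket (Object Ownership: bucket owner preferred)
$ aws s3api put-bucket-ownership-controls --bucket my-demo-bucket --ownership-controls 'Rules=[{ObjectOwnership=BucketOwnerPreferred}]'
# make sure Block Public Access is not blocking ACL-based access
$ aws s3api put-public-access-block --bucket my-demo-bucket --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false
# finally grant public read on one object via its ACL
$ aws s3api put-object-acl --bucket my-demo-bucket --key report.pdf --acl public-read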
Conditions:
1. For an HTTPS-only access restriction, use the condition "Bool": { "aws:SecureTransport": "false" } inside a Deny statement (or "true" inside an Allow); an example policy follows.
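A hedged example bucket policy using that condition (the bucket name is a placeholder); it denies any request that does not come over HTTPS:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-demo-bucket",
        "arn:aws:s3:::my-demo-bucket/*"
      ],
      "Condition": {
        "Bool": { "aws:SecureTransport": "false" }
      }
    }
  ]
}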
Cross Account S3 buckets
-
If your central S3 bucket is in Account_A and you are sending logs/files from Account_B and Account_C, the roles of Account_B and Account_C need to have the "GetObjectAcl" API permission in their IAM policies. And since this is cross-account, the S3 bucket policy also needs to be updated to allow the "GetObjectAcl" API call, without which the roles won't be able to send logs cross-account. Ex: Kinesis Data Firehose sends stream data to S3, and the role attached to the Firehose needs "GetObjectAcl" permission; further, this role needs to be added to the bucket policy with GetObject and GetObjectAcl permissions.
Canned ACL Permissions
-
ACLs actually generate and associate grants to different objects. So if we enable "bucket-owner-full-control" on an object and make the below API call, it actually returns 2 grants: one for the bucket owner, giving him permissions, and the other for the object owner.
aws s3api get-object-acl --bucket trialbucket235132 --key canned.txt
returns (roughly):
{
  "Owner": { ... },
  "Grants": [
    { ... grant for the object owner ... },
    { ... grant for the bucket owner ... }
  ]
}
ACL name / description:
i. private (object owner only): the object owner gets full control, no one else has access (default)
ii. public-read (Everyone/public access): the object owner has full control and all others have read permission
iii. bucket-owner-read: the owner of the object has full control of the object and the bucket owner has read permission
iv. bucket-owner-full-control: both the object owner and the bucket owner get full control over the object. This is the way to store data in a centralized S3 bucket.
Hence while sending objects from Account_B to Account_A, make sure that:
-
The IAM role sending the logs to the destination bucket has the "PutObjectAcl" and "GetObjectAcl" permissions added to its IAM policy.
-
The destination bucket allows the above role to make the "PutObjectAcl" and "GetObjectAcl" API calls in its bucket policy.
-
The ACL of the object being sent can be chosen while sending the object, hence in the CLI command while sending the logs cross-account make sure to append --acl bucket-owner-full-control so that the receiving bucket's owner can also view the objects.
ex: aws s3 cp trialFile.txt s3://destinationbucketisthis --acl bucket-owner-full-control
Now when this file gets uploaded, the bucket owner will also have permission to view the file, not just the uploader. This is why most scripts mention --acl with an ACL while uploading. ACLs only really matter when the object is being sent cross-account, because by default the ACL gives permissions to the account who uploads the object, and if the source and destination accounts are the same it would not matter.
ACLs also kick in when objects are already present inside the bucket but some other account wants to access them. In that case the permissions need to be changed to something like "public-read" (object owner has full control and all others have read permission).
Command to verify the ACL information of a specific object:
aws s3api get-object-acl --bucket mykplabs-central --key tmp-data.txt
Command to upload object with specific ACL
aws s3 cp canned.txt s3://mykplabs-central/ --acl bucket-owner-full-control
Pre-signed URL in S3
-
If, let's say, we created a website where only purchased songs can be downloaded, a signed URL can be generated and sent to customers so that they can download the content.
-
The main use-case here is that the bucket and object can remain private and not public, the signed URL would enable a guest user to access the private object even though they might not have an AWS account
-
A pre-signed URL can be generated using the CLI 'presign' command. It also accepts an expiry time; see the example below.
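A quick sketch (bucket and key are placeholders): this returns a time-limited URL that a guest user can open without an AWS account.
$ aws s3 presign s3://my-demo-bucket/songs/track01.mp3 --expires-in 3600
The URL embeds the caller's credentials and a signature, so it only works until the 3600-second expiry passes.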
Versioning
This enables 2 things
-
It helps us recover a deleted object
-
If we upload a file with the same name but different contents, the previous one gets overwritten; with versioning, we can recover the previous file as an earlier version.
Versioning applies to the complete bucket and not just some objects.
Once enabled, we can still suspend it. After suspending, the previous objects still keep their versions but new objects will not get versioned.
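A minimal CLI sketch (bucket name is a placeholder) of turning versioning on and later suspending it:
$ aws s3api put-bucket-versioning --bucket my-demo-bucket --versioning-configuration Status=Enabled
$ aws s3api put-bucket-versioning --bucket my-demo-bucket --versioning-configuration Status=Suspended
Note there is no 'disabled' state once versioning has been enabled; it can only be suspended.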
S3 Cross regional Replication
Cross-region replication allows us to replicate a bucket into a different region, which makes the architecture highly available (HA). It is compulsory for both the destination and the source bucket to have versioning enabled to implement cross-region replication.
This is done by creating a 'replication rule' on the source bucket. Go to 'Replication rules' of the S3 bucket and select the destination bucket, rule name, IAM role etc. We can choose a destination bucket in the same account or even in a different AWS account. While configuring, it prompts to enable 'versioning' if versioning is not enabled.
After configuring this, uploading any object to the source region bucket automatically copies the same object to the destination bucket in the other region.
Amazon S3 Inventory
Amazon S3 Inventory is one of the tools Amazon S3 provides to help manage your storage. You can use it to audit and report on the replication and encryption status of your objects for business, compliance, and regulatory needs. You can also simplify and speed up business workflows and big data jobs using Amazon S3 Inventory, which provides a scheduled alternative to the Amazon S3 synchronous List API operation. Amazon S3 Inventory does not use the List API to audit your objects and does not affect the request rate of your bucket.
Amazon S3 Inventory provides comma-separated values (CSV), Apache optimized row columnar (ORC) or Apache Parquet output files that list your objects and their corresponding metadata on a daily or weekly basis for an S3 bucket or a shared prefix (that is, objects that have names that begin with a common string). If weekly, a report is generated every Sunday (UTC) after the initial report. For information about Amazon S3 Inventory pricing, see Amazon S3 pricing.
You can configure multiple inventory lists for a bucket. You can configure what object metadata to include in the inventory, whether to list all object versions or only current versions, where to store the inventory list file output, and whether to generate the inventory on a daily or weekly basis. You can also specify that the inventory list file be encrypted.
You can query Amazon S3 Inventory using standard SQL by using Amazon Athena, Amazon Redshift Spectrum, and other tools such as Presto, Apache Hive, and Apache Spark. You can use Athena to run queries on your inventory files. You can use it for Amazon S3 Inventory queries in all Regions where Athena is available.
Object Lock
Object lock follows the WORM model. WORM stands for 'write once, read many', which means that once an object is written it cannot be edited or changed, however it can be read many times. Object lock only works with versioned buckets, hence versioning needs to be enabled for this.
This makes sure that the data cannot be tampered with after being written. This helps in case of attacks such as ransomware, during which the attacker would encrypt all the objects of the bucket. So if object lock is enabled, the attacker is not able to encrypt/overwrite the data since it cannot be changed.
Object lock has 2 modes:
i. Governance mode: this mode allows users with specific IAM permissions to remove the object lock from objects
ii. Compliance mode: in this mode, the protection cannot be removed by any user, including the root user.
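A hedged CLI sketch (bucket name and retention settings are placeholders). Note that object lock can only be enabled on a bucket created with object lock support:
$ aws s3api create-bucket --bucket my-locked-bucket --object-lock-enabled-for-bucket
$ aws s3api put-object-lock-configuration --bucket my-locked-bucket --object-lock-configuration '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Days":30}}}'
With COMPLIANCE mode and a 30-day default retention, object versions written to the bucket cannot be overwritten or deleted by anyone until the retention period ends.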
MFA
We can put an MFA condition in an IAM policy so that if the user tries to make an API call, they first need to have logged in with MFA (an example condition is shown below).
If the user wants to make API calls via the CLI, they need to append --token-code '<mfa code>' to the get-session-token command.
The 'mfa code' is shown in your 'authenticator' app. ex: aws sts get-session-token --serial-number arn:aws:iam::795574780076:mfa/sampleuser --token-code 774221
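A hedged example of such an IAM policy statement using the aws:MultiFactorAuthPresent condition key; it denies everything for requests that were not authenticated with MFA:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllWithoutMFA",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" }
      }
    }
  ]
}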
Permissions Boundary
It's like a limiting policy which doesn't grant any access by itself but only restricts it. So if the permissions boundary gives you EC2 read-only access and your own IAM policy gives you admin permissions, you still end up with only EC2 read-only access due to the permissions boundary.
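A short CLI sketch of attaching a boundary to a user (the user name 'temp' follows the earlier example; the managed policy is just an illustrative choice):
$ aws iam put-user-permissions-boundary --user-name temp --permissions-boundary arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess
From then on, the user's effective permissions are the intersection of its identity policies and this boundary.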
S3 and IAM
-
Bucket level access: arn:aws:s3:::mybucketsample.com
2. Object level access: arn:aws:s3:::mybucketsample.com/*
We can also use the ARN "arn:aws:s3:::mybucketsample.com*"; although this gives access to the bucket and the objects together, it also matches any bucket whose name starts with mybucketsample.com..., and hence is not recommended.
Policy variables such as ${aws:username} work only with the policy version element set to 2012-10-17 and not 2008-10-17, so keep this in mind while creating a policy that uses variables (example below).
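A hedged example policy (bucket name reused from above; the per-user prefix layout is an assumption) that gives each IAM user access only to their own folder:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::mybucketsample.com/${aws:username}/*"
    }
  ]
}
With "Version": "2008-10-17", ${aws:username} would be treated as a literal string instead of being substituted.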
AWS Control Tower : centrally manage SSO+CFN templates+compliance
-
Usually most companies have multiple AWS accounts to manage, maybe more than 100. The main things that are managed are:
-
Single SignOn option for users to be able to signIn to any account with single cred
-
Need single dashboard to view compliances of different AWS accounts
-
Would need CFN templates to deploy various services in different account which is manual work
-
-
Control Tower solves these requirements and many more by integrating with these services such as AWS SSO, Config and aggregators, CF Stacks, best practices, AWS Organizations etc
-
Control Tower makes sense in company who are newly deploying to AWS and want to scale env into multiple accounts without much fuss with CFN templates. Although usually companies would have their own SSO solution such as StreamlIne in Cisco and would rather choose to create a CFN that would deploy the architecture in few clicks. Why pay for extra cost.
AWS Service Role and AWS Pass Role
-
Service roles are used by services such as CFN to make api calls. IN case of CFN, if CFN doesnt have a role associated, then the API calls are made using permissions of the deployer. We get to choose a service role every time we are deploying a new stack.
-
PassRole
is a permission granted to IAM users and resources that permits them to use an IAM role. For example, imagine that there is an IAM role called Administrators. This role has powerful permissions that should not be given to most users. Next, imagine an IAM user who has permission to launch an Amazon EC2 instance. While launching the instance, the user can specify an IAM role to associate with the instance. If the user (who is not an administrator) were to launch an EC2 instance with the Administrators role, they could log in to the instance and issue commands using the permissions of that role. It would be a way for them to circumvent permissions because, while not being an administrator themselves, they could assign the IAM role to a resource and then use that resource to gain privileged access. To prevent this scenario, the user must be granted the iam:PassRole permission for that IAM role. If they do not have that permission, they will not be permitted to launch the instance or assign the role within other services. It gives them permission to pass or attach a role to a service or resource.
-
So basically when we choose an IAM role while deploying the stack since we want that CFN to use permissions of that IAM role, for this purpose, the IAM user which we are currently using should have PassRole permissions since its like passing a role to the resource. Also, whoever passes the role gets access to the resource and can further exploit these permissions.
-
Examples when passRole happens:
-
While assigning an IAM role to the EC2. Now this IAM role can be used after logging into the EC2
-
While assigning role to the CFN template. Now this template can be used to create/delete resources based on passedRole permissions.
-
AWS WorkMail
-
Earlier it was very tough to manage your own mail server, it needs to be installed and configured
-
Not that advanced, just very basic UI
Encryptions and Cryptography
AWS Cloud HSM:
-
This is FIPS 140-2 Level 3 Validated
-
Managed by AWS for network and overheads
-
AWS CloudHSM automatically load balances the requests and securely duplicates keys stored in any HSM to all the other HSMs in the cluster
-
Using atleast 2 HSM in different AZ is recommended for HA
-
These clusters in which these HSMs are deployed, resides in VPC
AWS KMS
Encrypting Data in KMS:
aws kms encrypt --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --plaintext fileb://plain-text.txt
Decrypting Data in KMS
aws kms decrypt --ciphertext-blob <value>
The ciphertext blob is returned by the encrypt command; this output is then passed to the decrypt command to decrypt the ciphertext.
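A hedged end-to-end sketch (the key alias and file names are placeholders). The CLI returns the ciphertext base64-encoded, so it is decoded before being fed back to decrypt:
$ aws kms encrypt --key-id alias/my-app-key --plaintext fileb://plain-text.txt --query CiphertextBlob --output text | base64 --decode > cipher.bin
$ aws kms decrypt --ciphertext-blob fileb://cipher.bin --query Plaintext --output text | base64 --decode > decrypted.txt
For a symmetric KMS key, decrypt does not even need --key-id, since the key information is embedded in the ciphertext blob.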
KMS Architecture
-
KMS uses Cloud HSM at backend intensively
-
Our view/access terminates at the KMS interface; behind that, the KMS interface is connected to KMS hosts which are then connected to the HSMs and the CMKs.
-
A maximum of 4 KB of data can be encrypted directly with a CMK; for more data than that, a data key is required.
Envelope Encryption
-
AWS KMS solution uses an envelope encryption strategy with AWS KMS keys. Envelope encryption is the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key. Use KMS keys to generate, encrypt, and decrypt the data keys that you use outside of AWS KMS to encrypt your data. KMS keys are created in AWS KMS and never leave AWS KMS unencrypted.
-
So to encrypt data
-
So first a customer master key is generated
-
From this CMK we ask AWS to generate a data key, returned in 2 forms: i. a ciphertext (encrypted) copy of the data key, ii. a plaintext copy of the same data key
-
The plaintext key is used to encrypt the plaintext data. The data is now ciphertext data
-
The ciphertext copy of the data key is stored with the ciphertext data. This ciphertext data key is itself encrypted by the master key
-
-
While Decryption:
-
First we call the KMS Decrypt API, passing the encrypted data key; KMS decrypts it using our CMK and returns the plaintext data key
-
This plaintext data key is then used to decrypt the ciphertext data that is stored alongside it.
-
Envelope encryption is the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.
Let’s see how envelope encryption works. Before encrypting any data, the customer needs to create one or more CMKs (Customer Master Keys) in AWS KMS.
Encryption
-
API request is sent to KMS to generate Data key using CMK. ‘GenerateDataKey’
-
KMS returns response with Plain Data key and Encrypted Data key (using CMK).
-
Data is encrypted using Plain Data key.
-
Plain Data key is removed from memory.
-
Encrypted data and the encrypted data key are packaged together as an envelope and stored.
Decryption
-
Encrypted Data key is extracted from envelope.
-
API request is sent to KMS using Encrypted Data key which has information about CMK to be used in KMS for decryption.
-
KMS returns response with Plain Data Key.
-
Encrypted Data is decrypted using Plain Data key.
-
Plain Data Key is removed from memory.
When the GenerateDataKey API call is made via the CLI, it returns three things: CiphertextBlob (the encrypted data key), Plaintext (the plaintext data key), and KeyId (the ARN of the CMK that was used).
This api call Returns a unique symmetric data key for use outside of AWS KMS. This operation returns a plaintext copy of the data key and a copy that is encrypted under a symmetric encryption KMS key that you specify. The bytes in the plaintext key are random; they are not related to the caller or the KMS key. You can use the plaintext key to encrypt your data outside of AWS KMS and store the encrypted data key with the encrypted data.
So basically:
-
First the GenerateDataKey API call is made, which returns 2 keys. Both are the same key material: one is the plaintext data key (used to encrypt the plaintext data) and the other is that same key encrypted under the CMK.
-
Data is encrypted using plain text data key
-
Since keeping the plaintext key around is not secure, it is deleted from memory, and the encrypted version of the key is stored with the data
-
Now the data is encrypted using the plaintext data key, and the data key itself is encrypted using our main CMK.
-
When decryption is needed, the encrypted data key is extracted from the stored envelope; it identifies the CMK that must be used
-
This encrypted key, which was stored with the data, is first decrypted by KMS using the CMK. The decrypted key can then be used to decrypt the data
-
So the data is decrypted and then the unencrypted key is deleted.
-
This is why it is called envelope encryption: our plain data is encrypted with the plaintext data key, and the plaintext data key is in turn encrypted with our CMK (see the end-to-end CLI sketch below).
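A minimal end-to-end sketch of this flow using the CLI, jq, and OpenSSL (the key ID and file names are placeholders; the OpenSSL passphrase mode is used purely for illustration, whereas real implementations such as the AWS Encryption SDK feed the raw key bytes to the cipher):
# 1. Ask KMS for a data key under the CMK
aws kms generate-data-key --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --key-spec AES_256 --output json > datakey.json
# The response has three fields: Plaintext, CiphertextBlob (both base64) and KeyId
jq -r .Plaintext datakey.json > plain-datakey.b64
jq -r .CiphertextBlob datakey.json > encrypted-datakey.b64
# 2. Encrypt the data locally with the plaintext data key
openssl enc -aes-256-cbc -pbkdf2 -pass file:plain-datakey.b64 -in data.txt -out data.enc
# 3. Discard the plaintext key; store data.enc together with encrypted-datakey.b64 (the envelope)
rm plain-datakey.b64 datakey.json
# 4. Decrypt later: KMS decrypts the stored data key, then the data is decrypted locally
base64 --decode encrypted-datakey.b64 > encrypted-datakey.bin
aws kms decrypt --ciphertext-blob fileb://encrypted-datakey.bin --query Plaintext --output text > plain-datakey.b64
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:plain-datakey.b64 -in data.enc -out data-decrypted.txt
rm plain-datakey.b64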
Deleting KMS Keys
-
Deletion is irreversible.
-
Keys can be scheduled for deletion; the default wait period is 30 days, and the allowed range is 7 to 30 days (see the CLI sketch below).
-
A key can be disabled immediately.
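For example (the key ID is a placeholder):
# Disable the key immediately
aws kms disable-key --key-id 1234abcd-12ab-34cd-56ef-1234567890ab
# Schedule deletion with the minimum 7-day waiting period
aws kms schedule-key-deletion --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --pending-window-in-days 7
# Cancel while deletion is still pending
aws kms cancel-key-deletion --key-id 1234abcd-12ab-34cd-56ef-1234567890ab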
A KMS key has 2 parts: the key metadata and the key material. Key metadata includes the KMS key ARN, key ID, key spec, key usage, description, etc.
An AWS managed key can't be deleted or edited and its permissions can't be modified; AWS rotates it every 365 days. A customer managed key (CMK) allows all of this; automatic rotation (every 365 days) has to be enabled by you, or you can rotate it manually.
Asymmetric Encryption
-
Asymmetric encryption uses public and private keys to encrypt and decrypt the data
-
Public key can be shared with public while private remains restricted and secret.
-
So if person A has to receive a message from me, I'll use person A's public key to encrypt the data. The encrypted data reaches person A, and only they will be able to decrypt it, since decryption requires their own private key.
-
Mostly used in the SSH and TLS protocols, but also used in Bitcoin, PGP and S/MIME
-
Below are the 2 uses of asymmetric keys in AWS (see the sign/verify sketch after this list):
-
Encrypt and Decrypt
-
Sign and Verify
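A minimal sketch of the sign-and-verify usage (key ID, algorithm, and file names are placeholders; the key must have been created as an asymmetric key with usage SIGN_VERIFY):
# Sign the message (RAW messages are limited to 4096 bytes)
aws kms sign --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --message fileb://message.txt --message-type RAW --signing-algorithm RSASSA_PKCS1_V1_5_SHA_256 --query Signature --output text | base64 --decode > message.sig
# Verify; SignatureValid is true if the message was not tampered with
aws kms verify --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --message fileb://message.txt --message-type RAW --signature fileb://message.sig --signing-algorithm RSASSA_PKCS1_V1_5_SHA_256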
Data Key Caching
-
A data key is needed for every encryption operation. If there are several different applications/API calls requesting encryption, it becomes hard to keep generating and handling data keys for each one, and it also shoots up the cost.
-
As a solution, we can implement data key caching, which caches a data key after it is generated so that it can be reused across multiple operations instead of being generated every time.
-
However, it is less secure, since storing and reusing data keys is generally not recommended.
If the CMK that was used to encrypt an EBS volume gets deleted, that doesn't mean the EBS volume immediately becomes useless. When that EBS volume was attached to the EC2 instance, a Decrypt call was made, so the instance already holds the plaintext key needed to use the volume. So even if the KMS CMK that encrypted the EBS volume is deleted, the EC2 instance will still be able to access the volume without issues. However, if the volume is detached and then reattached, the CMK will be needed again; hence avoid detaching the EBS volume from the EC2 instance if the key has been deleted.
KMS Key Policy and Key Access
-
If a key policy doesn't grant access to anyone, then no one has any access to that key: no user, including the root user and the key creator, can use the key unless access is specified explicitly in the key policy. This is different from S3 buckets, where the bucket policy can be left blank and users/roles can have access just through their own identity policies.
-
When the default KMS key policy is attached to the key, it enables administrators to define permissions at the IAM level, but that doesn't mean everyone gets access. Users will still need a KMS-specific permissions policy attached to their user/role.
-
The KMS key policy actually has a first statement that allows the use of IAM policies to access the key; its Principal is the account root principal (the ARN ending in :root), so that it applies to all users and roles in the account (see the default policy sketch at the end of this section).
-
Reduce Risk of Unmanageable CMK
-
Without permission from the KMS key policy, even though an IAM policy might grant permissions to the user, the user won't be able to access/use the key if the key policy doesn't allow it. Hence permission should be granted in both policies. This is unlike S3, where either policy alone is enough to provide access.
-
A CMK becomes unmanageable if the users mentioned in the KMS key policy get deleted. Even if we create another user with the same name as the one mentioned in the policy, that user will not have access; only the original user had access, and it is now deleted.
-
In this case, reach out to AWS Support, who can help restore access to the key.
-
Avoid deleting the default policy statement with the account root principal, to make sure the key doesn't become unmanageable.
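For reference, a minimal sketch of creating a key with that default "enable IAM policies" statement spelled out explicitly (the account ID is a placeholder):
cat > default-key-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Enable IAM User Permissions",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "kms:*",
      "Resource": "*"
    }
  ]
}
EOF
aws kms create-key --description "example key" --policy file://default-key-policy.json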
KMS Grants (Imp for Exam)
A grant is like a token that is generated by the granting user and can give specific permissions to a grantee.
Access to CMK can be managed in 3 ways:
-
using Key policy
-
Using IAM policies
-
Using Grants
Grant user: the user who has access to the CMK and has generated the grant
Grantee: the person who uses the grant to access the key.
Generating a grant returns a grant token and a grant ID. The grant token is used by the grantee with the encrypt/decrypt APIs, and the grant ID can be used by the granting user to revoke/delete the grant later on.
1. Generating the Grant
aws kms create-grant \
--key-id [KEY-ID] \
--grantee-principal [GRANTEE-PRINCIPAL-ARN] \
--operations "Encrypt"
2. Using Grant to Perform Operation
aws kms encrypt --plaintext "hello world" \
--key-id [KEY-ID] \
--grant-tokens [GRANT TOKEN RECEIVED]
3. Revoking the Grant
aws kms revoke-grant --key-id [KEY-ID] --grant-id [GRANT-ID-HERE]
The --operations part specifies what access is being given via the grant; if Encrypt is mentioned, then when the grantee uses the grant, they will be able to make the Encrypt API call.
Importing Key Material
-
We can use our own key material while setting up a CMK. We have to choose "External" under the 'Advanced options' on the first page of CMK creation.
-
We get a wrapping key and an import token. The wrapping key is used to encrypt the key material before it is uploaded.
CiphertextBlob: this is the section of the output that we get after encrypting data. The CiphertextBlob field holds the actual encrypted text, while the other fields include key-related info.
Encryption Context
-
All KMS operations with symmetric CMKs have the option to send an encryption context while encrypting data. If an encryption context was provided during encryption, the same values have to be provided again during decryption.
-
This acts like a second security layer: different pieces of data might be encrypted with the same key, but if we provide a unique encryption context for each, decryption (even with the same key) requires that unique encryption context to be sent. This is called Additional Authenticated Data (AAD).
-
The encryption context is provided as key-value pairs while encrypting and has to be provided again while decrypting the data.
# Standard
aws kms encrypt --key-id 13301e8a-7f95-4c00-bd73-6d0a20b03918 --plaintext fileb://file.txt --output text --query CiphertextBlob | base64 --decode > ExampleEncryptedFile
aws kms decrypt --ciphertext-blob fileb://ExampleEncryptedFile --output text --query Plaintext | base64 -d
# With encryption context
aws kms encrypt --key-id 13301e8a-7f95-4c00-bd73-6d0a20b03918 --plaintext fileb://file.txt --output text --query CiphertextBlob --encryption-context mykey=myvalue | base64 --decode > ExampleEncryptedFile
aws kms decrypt --ciphertext-blob fileb://ExampleEncryptedFile --output text --query Plaintext --encryption-context mykey=myvalue | base64 -d
Multi-Region Key
We can now create multi-Region keys instead of Region-specific keys.
This is achieved via a 'multi-Region primary key' and 'multi-Region replica' keys.
The multi-Region primary key is the main key that is created in one Region. We can then create replica keys, which have the same key ID and key material as the primary key but live in a different Region.
To create a multi-Region key, just start with the normal creation of a CMK; on the first page of the setup where we select the key type as 'Symmetric', open 'Advanced options' and choose 'Multi-Region key' instead of the default 'Single-Region key'. The rest of the setup is the same.
Once created, select the CMK, go to the 'Regionality' card, and click 'Create replica key'. This gives the option of choosing a new Region for the replica key (see the CLI sketch below).
We can manage key policy for this replica key separately.
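A minimal CLI sketch of the same steps (the multi-Region key ID, Regions, and policy file are placeholders):
# Replicate the multi-Region primary key (its ID starts with mrk-) into another Region
aws kms replicate-key --key-id mrk-1234abcd12ab34cd56ef1234567890ab --replica-region us-west-2
# The replica can be given its own key policy
aws kms put-key-policy --region us-west-2 --key-id mrk-1234abcd12ab34cd56ef1234567890ab --policy-name default --policy file://replica-key-policy.json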
The following are the shared properties of multi-Region keys.
-
Key ID — only the Region element of the key ARN differs. -
Key material
-
Key material origin
-
Key spec and encryption algorithms
-
Key usage
-
Automatic key rotation — You can enable and disable automatic key rotation only on the primary key. New replica keys are created with all versions of the shared key material. For details, see Rotating multi-Region keys.
A replica key itself cannot be replicated, and rotation cannot be managed from a replica key; this all happens from the primary key. Multi-Region key IDs start with mrk-.
S3 Encryption
-
Server Side Encryption
-
Client Side Encryption
SSE-S3: server-side encryption in which each object is encrypted with a unique key using AES-256, one of the strongest block ciphers.
SSE-KMS: we can use a KMS CMK to encrypt and decrypt the data.
SSE-C: in this the key is provided by the customer while the data is sent to the bucket, so the key needs to be provided along with the data on every request (see the upload sketch below).
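A minimal sketch of uploading an object with each server-side option (bucket name, key ID, and key file are placeholders):
# SSE-S3
aws s3 cp file.txt s3://example-bucket/file.txt --sse AES256
# SSE-KMS with a specific CMK
aws s3 cp file.txt s3://example-bucket/file.txt --sse aws:kms --sse-kms-key-id 1234abcd-12ab-34cd-56ef-1234567890ab
# SSE-C: the customer-provided key travels with the request
aws s3 cp file.txt s3://example-bucket/file.txt --sse-c AES256 --sse-c-key fileb://sse-c-key.bin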
S3 Same Region Replication
Amazon S3 now supports automatic and asynchronous replication of newly uploaded S3 objects to a destination bucket in the same AWS Region. Amazon S3 Same-Region Replication (SRR) adds a new replication option to Amazon S3, building on S3 Cross-Region Replication (CRR) which replicates data across different AWS Regions
S3 Replication
Replication enables automatic, asynchronous copying of objects across Amazon S3 buckets. Buckets that are configured for object replication can be owned by the same AWS account or by different accounts. You can replicate objects to a single destination bucket or to multiple destination buckets. The destination buckets can be in different AWS Regions or within the same Region as the source bucket.
To automatically replicate new objects as they are written to the bucket, use live replication, such as Cross-Region Replication (CRR). To replicate existing objects to a different bucket on demand, use S3 Batch Replication. For more information about replicating existing objects, see When to use S3 Batch Replication.
To enable CRR, you add a replication configuration to your source bucket. The minimum configuration must provide the following (see the sketch after this list):
-
The destination bucket or buckets where you want Amazon S3 to replicate objects
-
An AWS Identity and Access Management (IAM) role that Amazon S3 can assume to replicate objects on your behalf
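A minimal sketch, assuming versioning is already enabled on both buckets and the replication role already exists (bucket names, account ID, and role name are placeholders):
cat > replication.json <<'EOF'
{
  "Role": "arn:aws:iam::111122223333:role/example-s3-replication-role",
  "Rules": [
    {
      "Status": "Enabled",
      "Prefix": "",
      "Destination": { "Bucket": "arn:aws:s3:::example-destination-bucket" }
    }
  ]
}
EOF
aws s3api put-bucket-replication --bucket example-source-bucket --replication-configuration file://replication.json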
Along with Shield, we can use WAF as well in case of a DDoS attack:
For application layer attacks, you can use AWS WAF as the primary mitigation. AWS WAF web access control lists (web ACLs) minimize the effects of a DDoS attack at the application layer. To do so:
-
Use rate-based rules.
-
Review existing rate-based rules and consider lowering the rate limit threshold to block bad requests.
-
Query the AWS WAF logs to gather specific information of unauthorized activity.
-
Create a geographic match rule to block bad requests originating from a country that isn’t expected for your business.
-
Create an IP set match rule to block bad requests based on IP addresses.
-
Create a regex match rule to block bad requests.
AWS Shield
AWS Shield is a managed service that provides protection against Distributed Denial of Service (DDoS) attacks for applications running on AWS. AWS Shield Standard is automatically enabled to all AWS customers at no additional cost. AWS Shield Advanced is an optional paid service. AWS Shield Advanced provides additional protections against more sophisticated and larger attacks for your applications running on Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Route 53.
AWS Shield Standard provides protection for all AWS customers against common and most frequently occurring infrastructure (layer 3 and 4) attacks like SYN/UDP floods, reflection attacks, and others to support high availability of your applications on AWS.
AWS Shield Standard automatically protects your web applications running on AWS against the most common, frequently occurring DDoS attacks. You can get the full benefits of AWS Shield Standard by following the best practices of DDoS resiliency on AWS.
AWS Shield Advanced manages mitigation of layer 3 and layer 4 DDoS attacks. This means that your designated applications are protected from attacks like UDP Floods, or TCP SYN floods. In addition, for application layer (layer 7) attacks, AWS Shield Advanced can detect attacks like HTTP floods and DNS floods. You can use AWS WAF to apply your own mitigations, or, if you have Business or Enterprise support, you can engage the 24X7 AWS Shield Response Team (SRT), who can write rules on your behalf to mitigate Layer 7 DDoS attacks.
Benefits of AWS Shield Advanced
-
Advanced real-time metrics and reports: You can always find information about the current status of your DDoS protection and you can also see the real-time report with AWS CloudWatch metrics and attack diagnostics.
-
Cost protection for scaling: This helps you against bill spikes after a DDoS attack that can be created by scaling of your infrastructure in reaction to a DDoS attack.
-
AWS WAF included: Mitigate complex application-layer attacks (layer 7) by setting up rules proactively in AWS WAF to automatically block bad traffic.
-
You get 24×7 access to our DDoS Response Team (DRT) for help and custom mitigation techniques during attacks. To contact the DRT you will need the Enterprise or Business Support levels.
AssumeRoleWithWebIdentity
Calling AssumeRoleWithWebIdentity does not require the use of AWS security credentials. Therefore, you can distribute an application (for example, on mobile devices) that requests temporary security credentials without including long-term AWS credentials in the application (see the CLI sketch below).
Use signed URLs to restrict access to individual files; use signed cookies when there are multiple files, such as multiple video files of the same format.
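A minimal sketch (role ARN, session name, and token file are placeholders; the web identity token comes from the external identity provider, e.g. Cognito or Google):
# No AWS credentials are needed for this call; it returns temporary AccessKeyId, SecretAccessKey and SessionToken
aws sts assume-role-with-web-identity --role-arn arn:aws:iam::111122223333:role/example-web-role --role-session-name example-session --web-identity-token file://web-identity-token.txt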
Load Balancer
ALB: works at the HTTP and HTTPS traffic level. This means we can route traffic based on whatever data is present at layer 7, like GET, PUT, etc. requests and their headers; for example, ALB can route traffic based on the User-Agent header present in the request.
ALB can do path-based routing: we can define specific paths for a website such as /videos or /games, and ALB will route traffic to different EC2 instances according to the target group set for each path.
Listener: a component of the ALB that checks the protocol of the request and then forwards the traffic based on the rules set for that protocol.
NLB: works at the UDP, TCP and TLS protocol level. Gives more performance than ALB. We cannot associate a security group with an NLB. NLB works at the network layer. It selects targets based on a flow hash algorithm.
ELB Access Logs
This gives us information about what requests were being sent to our load balancer, which helps us set up WAF rules based on the requests hitting our LB. It is off by default; edit the load balancer attributes and check the access logs option to enable it. These logs go into S3. Access to the S3 bucket must be granted to the AWS-owned account for ELB (since the load balancer nodes do not reside in our own account); this account ID is mentioned in the AWS docs.
Glacier vaults
-
In glacier, the data is stored as archives.
-
Vault is a way through which the archives are grouped together in Glacier
-
We can manage the access to this vault using IAM policies
-
Resource based policy can also be enabled for vault.
-
Vault lock allows us to easily manage and enforce compliance controls for vaults with vault lock policy.
-
We can specify controls such as “Write Once read Many” (WORM) in vault policy and lock the policy from future edits
-
Vault lock policies are immutable: once the policy is locked, it can't be changed.
-
Initiating a vault lock policy returns a lock ID, which is used to complete the lock (see the CLI sketch below).
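A minimal sketch of the two-step lock (vault name and policy file are placeholders; '-' means the current account, and vault-lock-policy.json wraps the policy document as {"Policy": "<policy JSON as a string>"}):
# Step 1: initiate the lock; this returns a LockId and starts a 24-hour in-progress window
aws glacier initiate-vault-lock --account-id - --vault-name example-vault --policy file://vault-lock-policy.json
# Step 2: complete the lock with the returned lock ID (or use abort-vault-lock to start over)
aws glacier complete-vault-lock --account-id - --vault-name example-vault --lock-id [LOCK-ID]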
DynamoDB Encryption
-
AWS KMS can be used for server-side encryption of data in DynamoDB (encryption at rest). We can choose either the AWS-managed KMS key or our own CMK.
-
We can also choose to do client side encryption which means first encrypt the data and then send.
-
For client-side encryption, DynamoDB provides a library on GitHub called the DynamoDB Encryption Client.
Secrets Manager
-
We can use Secrets Manager to store text, keys, or strings in an encrypted manner.
-
Secrets Manager integrates with RDS databases to store their credentials and can be used with other databases as well.
-
We can enable automatic rotation for a secret or choose manual rotation (see the CLI sketch after this list).
-
We also get sample code for retrieving the secrets from our SDK.
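A minimal sketch (secret name, value, and rotation Lambda ARN are placeholders):
aws secretsmanager create-secret --name example/app/db-password --secret-string 'S3cr3tValue'
aws secretsmanager get-secret-value --secret-id example/app/db-password --query SecretString --output text
# Automatic rotation is driven by a rotation Lambda function
aws secretsmanager rotate-secret --secret-id example/app/db-password --rotation-lambda-arn arn:aws:lambda:us-east-1:111122223333:function:example-rotation --rotation-rules AutomaticallyAfterDays=30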
Secrets Manager integration with RDS
-
SM integrates well with MySQL, PostgreSQL, and Aurora on RDS to store their credentials, and we can use auto-rotation with them.
-
When we create a secret for RDS, it actually creates a Lambda function that goes and changes the credentials on RDS when rotation is needed. So if the RDS instance is inside a VPC, make sure to allow inbound access for that Lambda, or deploy the Lambda in the same VPC.
DNS Cache Posioning
In this kind of attack, wrong information is fed into a user's DNS cache so that they get redirected to the wrong website. So if we wanted to go to google.com, cache poisoning would insert a malicious IP that redirects us to a malicious server. Example: ettercap in Kali.
DNSSEC
-
DNSSEC makes use of asymmetric keys, i.e. it uses a private and a public key.
-
DNSSEC creates a secure domain name system by adding cryptographic signatures to existing DNS records. It is similar to signing a message: signing produces a separate signature file (e.g. sign.txt), and when we run a verify operation, it tells us whether the message was tampered with.
-
DNSSEC works similarly to this sign/verify flow: resolvers use the published public key to verify that the response really came from the authoritative server, and only then does the website load for the client.
-
It does involve more steps, and hence more computational power is needed.
-
We can enable DNSSEC in Route 53 for our domain by enabling "DNSSEC signing"; this creates a KSK (key-signing key) backed by a KMS key (see the CLI sketch below).
-
When we request a website, we get back website IP and the website signature.
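A minimal CLI sketch of enabling DNSSEC signing (hosted zone ID, KMS key ARN, and names are placeholders; the KMS key must be an asymmetric ECC_NIST_P256 key with usage SIGN_VERIFY in us-east-1):
# Create the key-signing key (KSK) backed by the KMS key
aws route53 create-key-signing-key --hosted-zone-id Z0123456789EXAMPLE --key-management-service-arn arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab --name example-ksk --status ACTIVE --caller-reference example-ref-1
# Turn on DNSSEC signing for the hosted zone
aws route53 enable-hosted-zone-dnssec --hosted-zone-id Z0123456789EXAMPLE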
Exam Important Points
-
Dealing with exposed access keys
-
Determine the access associated with the key
-
Deactivate it
-
Add an explicit deny-all policy
-
Review logs to look for possible backdoors
-
-
Dealing with a compromised EC2 instance
-
Lock down the EC2 instance's security group
-
Take an EBS snapshot
-
Take a memory dump
-
Perform the forensic analysis
-
-
GuardDuty
-
Whitelist the EIP of the EC2 instance if we are doing pen testing
-
It uses VPC Flow Logs, CloudTrail logs and DNS logs
-
It can generate alerts if CloudTrail gets disabled.
-
-
We can now use a "pre-authorized scanning engine" for pen testing an AWS environment without asking AWS for permission, as used to be required earlier.
-
Fields of VPC Flow Logs (an illustrative record follows the list):
-
version : flow log version
-
account id
-
network interface id
-
source address
-
destaddr: destination address
-
source port
-
dest port
-
protocol number
-
number of packets transferred
-
number of bytes transferred
-
start time in seconds
-
end time in seconds
-
action : accept vs reject
-
logging status of flow log
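An illustrative default-format record with those fields in order (all values are hypothetical):
2 123456789010 eni-1235b8ca123456789 172.31.16.139 172.31.16.21 20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK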
-
-
AWS Inspector scans target based on various baselines:
-
Common Vulnerabilities and Exposures, i.e. CVE
-
CIS benchmarks
-
Security Best Practices
-
Network Reachability
-
-
Systems Manager:
-
Run Command: to run commands on EC2
-
Patch compliance: allows us to check the compliance status of EC2 instances with respect to patching activity
-
Patch baseline: determines which patches need to be installed on EC2 instances; we can also define patch approvals for the same
-
Maintenance window
-
Parameter Store
-
-
AWS Config use cases
-
Can be used to audit IAM policies assigned to users before and after a specific event
-
Detect if CloudTrail is disabled
-
Verify EC2 instances use approved AMIs only
-
Detect security groups open to the world
-
Detect if EBS volumes are encrypted
-
It can work with Lambda for automated remediation
-
-
AWS Athena
-
This doesn't require additional infrastructure to query logs in S3
-
-
AWS WAF
-
Can be attached to ALB, CloudFront distributions, and API Gateway
-
Blocks layer 7 attacks
-
Can also be used to block requests based on User-Agent headers
-
-
CloudWatch Logs
-
Steps to set up and troubleshoot logs are important
-
First assign an appropriate role to the EC2 instance, then install the CloudWatch agent, and finally configure the agent to log correctly.
-
Verify the awslogs agent is running
-
CloudWatch metric filters can be used to get alerts
-
-
IP Packet inspection
-
If we want to inspect the IP packets for anomalies, we have to create a proxy server and route all the traffic from the VPC through that server.
-
Or install an appropriate agent on the hosts and examine the traffic at the host level.
-
-
AWS Cloudtrail
-
AWS Macie: can be used to find PII data in S3 buckets
-
AWS Security Hub: helps with compliance management
-
Services that would help us in case of a DDoS attack:
-
WAF
-
Autoscaling
-
AWS Shield (layer 3, layer 4 and layer 7)
-
R53
-
CloudWatch
-
-
EC2 key pair
-
If we create an AMI of an EC2 instance and copy it to a different region, then launch an EC2 instance from that AMI, the new instance will still have the older public key in its authorized_keys file. We can use the older private key to SSH into this new EC2 instance.
-
-
EBS secure Data wiping
-
AWS wipes the data when provisioning the storage for a new customer
-
Before deleting an EBS volume, we can also wipe the data manually
-
-
Cloudfront
-
We can use OAI so that the content can only be reached through CloudFront and not directly from the origin S3 bucket.
-
We can use signed URLs for RTMP distributions; signed cookies are not supported for RTMP.
-
-
We can use signed cookies when we have multiple files of the same format that require restricted access; hence cookies are used for multiple files.
-
CloudFront can use a custom TLS certificate.
-
-
VPC Endpoints
-
gateway endpoint
-
interface endpoint
-
-
NACLs
-
Stateless firewall offering
-
A maximum of 40 rules can be applied (the default quota is 20, adjustable up to 40)
-
-
AWS SES
-
used for emails
-
Can be encrypted
-
-
Host Based IDS
-
This can be installed manually on EC2 for "file integrity monitoring".
-
Can be integrated with CloudWatch for alerting
-
-
IPS
-
For an intrusion prevention system, this can be installed on EC2 instances to scan traffic and send data to a central EC2 instance
-
-
ADFS
-
Active Directory Federation Services (ADFS) is an SSO solution by Microsoft.
-
Supports SAML
-
Imp: AD groups are associated with IAM roles. AD GROUP -> IAM ROLE
-
-
Cognito
-
Provides authentication, authorization, and user management for web and mobile apps
-
Good choice for mobile application auth needs
-
Catch word: Social Media website
-
-
KMS
-
If we have accidentally deleted the imported key material, we can download the new wrapping key and import token and import the original key into existing CMK
-
The Encrypt API can only encrypt data up to 4 KB.
-
GenerateDataKeyWithoutPlaintext: Returns a unique symmetric data key for use outside of AWS KMS. This operation returns a data key that is encrypted under a symmetric encryption KMS key that you specify. The bytes in the key are random; they are not related to the caller or to the KMS key. GenerateDataKeyWithoutPlaintext is identical to the GenerateDataKey operation except that it does not return a plaintext copy of the data key. This operation is useful for systems that need to encrypt data at some point, but not immediately. When you need to encrypt the data, you call the Decrypt operation on the encrypted copy of the key. It's also useful in distributed systems with different levels of trust. For example, you might store encrypted data in containers. One component of your system creates new containers and stores an encrypted data key with each container. Then, a different component puts the data into the containers. That component first decrypts the data key, uses the plaintext data key to encrypt data, puts the encrypted data into the container, and then destroys the plaintext data key. In this system, the component that creates the containers never sees the plaintext data key. -
To keep using an expiring CMK without changing either the CMK or the key material, we have to re-wrap and re-import the same key material into the CMK; this gives it a new expiration date.
-
Import token while importing CMK key material is only valid for 24 hours
-
Digital signing with the new asymmetric keys feature of AWS KMS
-
-
-
S3
-
To restrict the Region in which buckets can be created, use a condition such as StringLike: {"s3:LocationConstraint": "us-west-2"}
-
To Learn:
-
KMS Key imports concepts
-
Credential report contents
-
key rotation enable/disable and frequency
-
SCP should not be applied to root account
-
How and which service does Document Signing
Kinesis Client Library (KCL): requires DynamoDB and CloudWatch services and permissions
Kinesis:
Basic monitoring: Sends stream level data to CW
Enhanced: Sends shard level data
KMS
The Encrypt API only encrypts up to 4 KB of data. You can use this operation to encrypt small amounts of arbitrary data, such as a personal identifier or database password, or other sensitive information. You don't need to use the Encrypt operation to encrypt a data key: the GenerateDataKey and GenerateDataKeyPair operations return a plaintext data key and an encrypted copy of that data key.
Rotating AWS KMS keys
-
Reusing the same KMS key forever is not encouraged; hence either create a new KMS key for your application or rotate the existing key's key material.
-
If we enable 'Automatic key rotation', AWS rotates the key material every year automatically.
-
Also, the previous/older key material is still stored in KMS so that the data that was encrypted using previous key material can still be decrypted. The previous key material is only deleted when the KMS key itself is deleted.
-
We can track the key rotation on AWS CloudWatch And Cloudtrail logs.
-
When a key material is rotated, if we ask the Key to now decrypt the data, the decryption would happen automatically by the older key material which was used to encrypt the data. There is no need to specify which key material should be used, KMS transparently does this for us without any additional steps or code changes. However, the newer encryptions would use new key materials for encryption and decryption.
-
i.e. When you use a rotated KMS key to encrypt data, AWS KMS uses the current key material. When you use the rotated KMS key to decrypt ciphertext, AWS KMS uses the version of the key material that was used to encrypt it. You cannot request a particular version of the key material. Because AWS KMS transparently decrypts with the appropriate key material, you can safely use a rotated KMS key in applications and AWS services without code changes.
-
However, automatic key rotation has no effect on the data that the KMS key protects. It does not rotate the data keys that the KMS key generated or re-encrypt any data protected by the KMS key, and it will not mitigate the effect of a compromised data key.
-
Rotation of the keys managed by AWS is done automatically by AWS and cannot be enabled/disabled by us.
-
In May 2022, AWS KMS changed the rotation schedule for AWS managed keys from every three years (approximately 1,095 days) to every year (approximately 365 days). Ref: https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
-
New AWS managed keys are automatically rotated one year after they are created, and approximately every year thereafter.
-
Existing AWS managed keys are automatically rotated one year after their most recent rotation, and every year thereafter.
-
You do not need to change applications or aliases that refer to the key ID or key ARN of the KMS key.
-
Rotating key material does not affect the use of the KMS key in any AWS service.
-
After you enable key rotation, AWS KMS rotates the KMS key automatically every year. You don’t need to remember or schedule the update
-
You might decide to create a new KMS key and use it in place of the original KMS key. This has the same effect as rotating the key material in an existing KMS key, so it’s often thought of as manually rotating the key.
-
Manual rotation is a good choice when you want to control the key rotation schedule. It also provides a way to rotate KMS keys that are not eligible for automatic key rotation, including:
-
KMS keys in custom key stores
-
KMS keys with imported key material
-
KMS CMK: Automatic key rotation is disabled by default on customer managed keys but authorized users can enable and disable it. When you enable (or re-enable) automatic key rotation, AWS KMS automatically rotates the KMS key one year (approximately 365 days) after the enable date and every year thereafter.
-
Multi-Region Keys: You can enable and disable automatic key rotation for multi-Region keys. You set the property only on the primary key. When AWS KMS synchronizes the keys, it copies the property setting from the primary key to its replica keys. When the key material of the primary key is rotated, AWS KMS automatically copies that key material to all of its replica keys. For details, see Rotating multi-Region keys.
-
While a KMS key is disabled, AWS KMS does not rotate it. However, the key rotation status does not change, and you cannot change it while the KMS key is disabled. When the KMS key is re-enabled, if the key material is more than one year old, AWS KMS rotates it immediately and every year thereafter. If the key material is less than one year old, AWS KMS resumes the original key rotation schedule.
-
While a KMS key is pending deletion, AWS KMS does not rotate it. The key rotation status is set to false and you cannot change it while deletion is pending. If deletion is canceled, the previous key rotation status is restored. If the key material is more than one year old, AWS KMS rotates it immediately and every year thereafter. If the key material is less than one year old, AWS KMS resumes the original key rotation schedule.
Rotating keys manually
-
Manual rotation means creating a new KMS key which would have new key materials. Ie You might want to create a new KMS key and use it in place of a current KMS key instead of enabling automatic key rotation. When the new KMS key has different cryptographic material than the current KMS key, using the new KMS key has the same effect as changing the key material in an existing KMS key. The process of replacing one KMS key with another is known as manual key rotation.
-
Manual rotation gives us the power to manage when the key material is rotated, because we can create a new key at any time.
-
However, when we do this manual rotation of the key, we need to update references to the KMS key ID or key ARN in our application.
-
Aliases, which associate a friendly name with a KMS key, can make this process easier. Use an alias to refer to a KMS key in your applications. Then, when you want to change the KMS key that the application uses, instead of editing your application code, change the target KMS key of the alias.
-
When calling the Decrypt operation on manually rotated symmetric encryption KMS keys, omit the KeyId parameter from the command. AWS KMS automatically uses the KMS key that encrypted the ciphertext. -
A key alias is like a friendly key name that we can reference in our code. If we do manual rotation, the key ARN changes and we would have to update the key ARN in our code. This is why an alias is used in code: the alias of the older key can be moved to the newly created key, like naming a new file with the same name so that the code picks up the new file. For this, the 'update-alias' command is used (see the sketch below).
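For example (the alias name and new key ID are placeholders):
# Point the existing alias at the newly created key; code that refers to alias/example-app-key keeps working
aws kms update-alias --alias-name alias/example-app-key --target-key-id 0987dcba-09fe-87dc-65ba-ab0987654321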
Importing KMS Key
-
We can create a KMS key without any key material and then choose to import our own key material into it. This feature is called 'bring your own key' (BYOK); see the CLI sketch at the end of this section.
-
Create a symmetric encryption KMS key with no key material – the key spec must be SYMMETRIC_DEFAULT and the origin must be EXTERNAL
-
Imported key material is supported only for symmetric encryption KMS keys in AWS KMS key stores, including multi-Region symmetric encryption KMS keys. It is not supported on:
-
KMS keys in custom key stores
-
We cannot enable automatic key rotation for imported key material. For this we'll have to rotate manually (create a new key with new material and re-point the alias to it).
-
You must reimport the same key material that was originally imported into the KMS key. You cannot import different key material into a KMS key. Also, AWS KMS cannot create key material for a KMS key that is created without key material.
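A minimal sketch of the import flow (key ID and file names are placeholders; wrapping the key material with the downloaded public key is done locally, e.g. with OpenSSL, and is not shown):
# 1. Create a key with no key material
aws kms create-key --origin EXTERNAL --description "BYOK example"
# 2. Download the wrapping public key and import token (the token is valid for 24 hours)
aws kms get-parameters-for-import --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --wrapping-algorithm RSAES_OAEP_SHA_256 --wrapping-key-spec RSA_2048
# 3. Import the locally wrapped key material
aws kms import-key-material --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --encrypted-key-material fileb://wrapped-key-material.bin --import-token fileb://import-token.bin --expiration-model KEY_MATERIAL_DOES_NOT_EXPIRE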
Credential Report Contents:
-
user
-
arn
-
user_creation_time
-
password_enabled
-
password_last_used
-
password_last_changed
-
password_next_rotation
-
mfa_active
-
access_key_1_active
-
access_key_1_last_rotated
-
access_key_1_last_used_date
-
access_key_1_last_used_region
-
access_key_1_last_used_service
-
access_key_2_active
-
…..
-
cert_1_active