Tuesday, June 7, 2016

01_AWS - Compute

Amazon Web Services

 

2006: Amazon launched Amazon Web Services (AWS) on a utility computing basis, although the initial release dated back to July 2002.

Amazon Web Services (AWS) is a collection of remote computing services (also called web services) that together make up a cloud computing platform, offered over the Internet by Amazon.com.

The most central and well-known of these services are Amazon EC2 (Elastic Compute Cloud) and Amazon S3 (Simple Storage Service).

 

Book:

Amazon Web Services is based on SOA standards, including HTTP, REST, and SOAP transfer protocols, open source and commercial operating systems, application servers, and browser-based access.

 

Topics:

 

1.       Amazon EC2

2.       Amazon EC2 Container Registry

3.       Amazon EC2 Container Service

4.       AWS Elastic Beanstalk

5.       AWS Lambda

6.       Auto Scaling

7.       Elastic Load Balancing

8.       Amazon VPC (Virtual Private Cloud)

 

https://www.youtube.com/watch?v=lioD902fOOQ

 

1). Amazon EC2 -  Virtual Servers in the Cloud

 

·         Amazon EC2 is a web service that provides Scalable, Resizable Computing Capacity—literally, servers in Amazon's data centers—in the Amazon Web Services (AWS) cloud, which you use to build and host your software systems.

·         Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster.

·         You can use Amazon EC2 to Launch Virtual Servers as you need, configure security and networking, and manage storage.

·         Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic.

 

 

Features:

 

·         Instances  -  Virtual computing environments, known as instances

·         Amazon Machine Images (AMIs) - Preconfigured templates for your instances, known as AMIs, that package the bits you need for your server (including the operating system and additional software)

·         Instance Types - Various configurations of CPU, Memory, Storage, and Networking Capacity for your instances, known as Instance Types

·         Key Pairs - Secure login information for your instances using key pairs (AWS stores the public key, and you store the private key in a secure place)

·         Instance Store Volumes - Storage volumes for temporary data that's deleted when you stop or terminate your instance, known as Instance Store Volumes

·         Amazon EBS volumes - Persistent storage volumes for your data using Amazon Elastic Block Store (Amazon EBS), known as Amazon EBS volumes

·         Regions And Availability Zones - Multiple physical locations for your resources, such as Instances and Amazon EBS volumes, known as regions and Availability Zones

·         Security Groups - A firewall that enables you to specify the protocols, ports, and source IP ranges that can reach your instances, known as security groups

·         Elastic IP addresses -  Static IP addresses for dynamic cloud computing, known as Elastic IP addresses

·         Tags - Metadata that you can create and assign to your Amazon EC2 resources, known as tags

·         Virtual Private Clouds (VPCs) -  Virtual networks you can create that are logically isolated from the rest of the AWS cloud, and that you can optionally connect to your own network, known as virtual private clouds (VPCs)

·         AWS Management Console - A simple web-based user interface for creating and managing your Amazon EC2 resources

 

Accessing Amazon EC2:

 

UI – Amazon EC2 Console.

 

Command Line:

o    AWS Command Line Interface (CLI)

o    Amazon EC2 Command Line Interface (CLI) Tools

o    AWS Tools for Windows PowerShell
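
SDKs – You can also drive Amazon EC2 programmatically through the AWS SDKs. Below is a minimal sketch using the AWS SDK for Python (boto3) that launches and tags one instance; the AMI ID, key pair name, and security group ID are hypothetical placeholders.

import boto3

# Minimal sketch: launch one t2.micro instance and tag it.
ec2 = boto3.client('ec2', region_name='us-east-1')

response = ec2.run_instances(
    ImageId='ami-12345678',            # hypothetical AMI ID
    InstanceType='t2.micro',
    KeyName='my-key-pair',             # hypothetical key pair (AWS stores the public key)
    SecurityGroupIds=['sg-12345678'],  # hypothetical security group (the instance firewall)
    MinCount=1,
    MaxCount=1,
)

instance_id = response['Instances'][0]['InstanceId']

# Tags are the metadata mechanism listed under Features above.
ec2.create_tags(Resources=[instance_id],
                Tags=[{'Key': 'Name', 'Value': 'demo-instance'}])
print(instance_id)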

 

 

2). Amazon EC2 Container Registry (ECR)

 

·         Amazon EC2 Container Registry (Amazon ECR) is a fully managed Docker Container Registry that makes it easy for developers to store, manage, and deploy Docker Container Images.

·         Amazon ECR is a managed AWS Docker registry service that is secure, scalable, and reliable.

·         Amazon ECR supports private Docker repositories with resource-based permissions using AWS IAM so that specific users or Amazon EC2 instances can access repositories and images.

·         Developers can use the Docker CLI to push, pull, and manage images.

 

 

Components

·         Registry - An Amazon ECR registry is provided to each AWS account; you can create image repositories in your registry and store images in them.

·         Authorization token –

o    Your Docker client needs to authenticate to Amazon ECR registries as an AWS user before it can push and pull images.

o    The AWS CLI get-login command provides you with authentication credentials to pass to Docker (see the sketch after this list).

 

·         Repository - An Amazon ECR image repository contains your Docker images.

·         Repository policy - You can control access to your repositories and the images within them with repository policies. For more information, see Amazon ECR Repository Policies.

·         Image - You can push and pull Docker images to your repositories. You can use these images locally on your development system, or you can use them in Amazon ECS task definitions.
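
As a sketch of the authorization-token flow mentioned above, the following uses the AWS SDK for Python (boto3) to fetch temporary Docker credentials; the decoded token is a "user:password" pair that you would pass to docker login. The region is a hypothetical choice.

import base64
import boto3

# Minimal sketch: obtain temporary Docker credentials for the account's ECR registry.
ecr = boto3.client('ecr', region_name='us-east-1')

auth = ecr.get_authorization_token()['authorizationData'][0]

# The token is base64-encoded "user:password" (the user is typically "AWS").
user, password = base64.b64decode(auth['authorizationToken']).decode('utf-8').split(':')
registry = auth['proxyEndpoint']

# These values would then be passed to `docker login` before `docker push`/`docker pull`.
print(user, registry)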

 

 

 

3). Amazon EC2 Container Service (Amazon ECS):

 

·         Amazon ECS is a highly scalable, fast Container Management Service that makes it easy to run, stop, and manage Docker containers on a Cluster of Amazon EC2 instances.

·         Amazon ECS lets you:

o    Launch and stop container-enabled applications with simple API calls,

o    Get the state of your cluster from a centralized service,

o    Access many familiar Amazon EC2 features,

o    Schedule the placement of containers across your cluster based on your resource needs, isolation policies, and availability requirements, and

o    Eliminate the need to operate your own cluster management and configuration management systems or worry about scaling your management infrastructure.

 

Components:

 

·         Container instance - An Amazon EC2 instance that is running the Amazon ECS agent and has been registered into a cluster.

·         Cluster - A logical grouping of container instances that you can place tasks on.

·         Task definition - A description of an application that contains one or more container definitions.

·         Scheduler - The method used for placing tasks on container instances.

·         Service - An Amazon ECS service allows you to run and maintain a specified number of instances of a task definition simultaneously.

·         Task - An instantiation of a task definition that is running on a container instance.
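
To make the components concrete, here is a minimal sketch using the AWS SDK for Python (boto3) that creates a cluster, registers a one-container task definition, and runs a task; the cluster name, family, and image are hypothetical, and the cluster is assumed to already have registered container instances.

import boto3

ecs = boto3.client('ecs', region_name='us-east-1')

# Cluster: a logical grouping of container instances.
ecs.create_cluster(clusterName='demo-cluster')

# Task definition: one or more container definitions.
ecs.register_task_definition(
    family='demo-web',
    containerDefinitions=[{
        'name': 'web',
        'image': 'nginx:latest',       # hypothetical image
        'cpu': 128,
        'memory': 128,
        'essential': True,
        'portMappings': [{'containerPort': 80, 'hostPort': 80}],
    }],
)

# Task: an instantiation of the task definition, placed by the scheduler.
ecs.run_task(cluster='demo-cluster', taskDefinition='demo-web', count=1)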

 

 

4). AWS Elastic Beanstalk

 

AWS comprises dozens of services, each of which exposes an area of functionality.  While the variety of services offers flexibility for how you want to manage your AWS infrastructure, it can be challenging to figure out which services to use and how to provision them.

 

·         With Elastic Beanstalk, you can quickly Deploy And Manage Applications in the AWS cloud without worrying about the infrastructure that runs those applications.

·         AWS Elastic Beanstalk reduces management complexity without restricting choice or control.

·         You simply upload your application, and Elastic Beanstalk automatically handles the details of Capacity Provisioning, Load Balancing, Scaling, And Application Health Monitoring.

·         Elastic Beanstalk provides developers and systems administrators an easy, fast way to deploy and manage their applications without having to worry about AWS infrastructure.

 

You can also perform most deployment tasks, such as changing the size of your fleet of Amazon EC2 instances or monitoring your application, directly from the Elastic Beanstalk web interface.

After you create and deploy your application, information about the application—including metrics, events, and environment status—is available through the AWS Management Console, APIs, or Command Line Interfaces, including the unified AWS CLI.
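
For example, a deployment can be scripted with the AWS SDK for Python (boto3) along the following lines; the application, environment, and S3 bundle names are hypothetical, and the application source bundle is assumed to be already uploaded to S3.

import boto3

eb = boto3.client('elasticbeanstalk', region_name='us-east-1')

# Register a new application version from a source bundle already in S3.
eb.create_application_version(
    ApplicationName='demo-app',                                     # hypothetical
    VersionLabel='v1',
    SourceBundle={'S3Bucket': 'my-deploy-bucket', 'S3Key': 'demo-app-v1.zip'},
)

# Point an existing environment at the new version; Elastic Beanstalk then
# handles capacity provisioning, load balancing, scaling, and health monitoring.
eb.update_environment(EnvironmentName='demo-app-env', VersionLabel='v1')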

 

 

 

5). AWS Lambda:

 

·         AWS Lambda is a Compute Service to which you upload your code; the service then runs the code on your behalf using AWS infrastructure.

·         After you upload your code and create what is called a Lambda Function, AWS Lambda takes care of provisioning and managing the servers that run the code.

·         AWS Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code monitoring and logging.

·         All you need to do is supply your code in one of the languages that AWS Lambda supports (currently Node.js, Java, and Python).

 

·         AWS Lambda is an ideal Compute Platform for many application scenarios, provided that you can write your application code in a language supported by AWS Lambda (that is, Node.js, Java, or Python) and run it within the standard runtime environment and resources provided by Lambda.

 

·         AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second.

·         With these capabilities, you can use Lambda to easily build data processing triggers for AWS services such as:

o    Amazon S3,

o    Amazon DynamoDB, and

o    Amazon Kinesis,

o    or create your own back end that operates at AWS scale, performance, and security (a minimal deployment sketch follows).
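
The sketch below uses the AWS SDK for Python (boto3); the function name, role ARN, and zip file are hypothetical, and function.zip is assumed to contain a handler.py exposing lambda_handler(event, context).

import boto3

client = boto3.client('lambda', region_name='us-east-1')

# Create the Lambda function from a deployment package (zip of handler.py).
with open('function.zip', 'rb') as f:
    client.create_function(
        FunctionName='demo-function',
        Runtime='python2.7',                                  # the Python runtime of the era
        Role='arn:aws:iam::123456789012:role/lambda-exec',    # hypothetical execution role
        Handler='handler.lambda_handler',
        Code={'ZipFile': f.read()},
    )

# Invoke it synchronously with a small JSON payload.
resp = client.invoke(FunctionName='demo-function', Payload=b'{"key": "value"}')
print(resp['Payload'].read())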

 

6). Auto Scaling

 

·         Auto Scaling is a web service designed to Launch Or Terminate Amazon EC2 instances automatically based on user-defined policies, schedules, and health checks. 

 

·         You create collections of EC2 instances, called Auto Scaling Groups.

·         Auto Scaling helps you ensure that you have the correct number of EC2 instances available to handle the load for your application.

·         You can specify the minimum and maximum number of instances in each Auto Scaling group, and Auto Scaling ensures that your group never goes below or above this size.

·         If you specify the desired capacity, either when you create the group or at any time thereafter, Auto Scaling ensures that your group has this many instances.

·         If you specify scaling policies, then Auto Scaling can launch or terminate instances as demand on your application increases or decreases.

 

 

Components:

 

Groups –

Your EC2 instances are organized into groups so that they can be treated as a logical unit for the purposes of scaling and management.

When you create a group, you can specify its minimum, maximum, and desired number of EC2 instances.

 

Launch configurations

Your group uses a launch configuration as a template for its EC2 instances.

When you create a launch configuration, you can specify information such as the AMI ID, instance type, key pair, security groups, and block device mapping for your instances.

 

Scaling plans

A scaling plan tells Auto Scaling when and how to scale.

For example, you can base a scaling plan on the occurrence of specified conditions (dynamic scaling) or on a schedule.
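
A minimal sketch of the three components with the AWS SDK for Python (boto3); the AMI, security group, and Availability Zones are hypothetical placeholders.

import boto3

autoscaling = boto3.client('autoscaling', region_name='us-east-1')

# Launch configuration: the template for instances in the group.
autoscaling.create_launch_configuration(
    LaunchConfigurationName='demo-lc',
    ImageId='ami-12345678',            # hypothetical AMI
    InstanceType='t2.micro',
    SecurityGroups=['sg-12345678'],    # hypothetical security group
)

# Group: minimum, maximum, and desired capacity.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName='demo-asg',
    LaunchConfigurationName='demo-lc',
    MinSize=1,
    MaxSize=4,
    DesiredCapacity=2,
    AvailabilityZones=['us-east-1a', 'us-east-1b'],
)

# Scaling plan (dynamic): a policy that adds one instance when triggered.
autoscaling.put_scaling_policy(
    AutoScalingGroupName='demo-asg',
    PolicyName='scale-out-by-one',
    AdjustmentType='ChangeInCapacity',
    ScalingAdjustment=1,
)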

 

Accessing Auto Scaling:

 

UI – Amazon EC2 Console.

 

Command Line:

o    AWS Command Line Interface (CLI)

o    AWS Tools for Windows PowerShell

 

 

 

7). Elastic Load Balancing:

 

·         Elastic Load Balancing automatically distributes your incoming application traffic across multiple Amazon EC2 instances.

·         You can add and remove EC2 instances from your load balancer as your needs change, without disrupting the overall flow of information.

·         It detects unhealthy instances and reroutes traffic to healthy instances until the unhealthy instances have been restored.

·         If a failed EC2 instance is restored, Elastic Load Balancing restores the traffic to that instance.

·         Elastic Load Balancing can also serve as the first line of defense against attacks on your network.

·         You can offload the work of encryption and decryption to your load balancer so that your EC2 instances can focus on their main work.

 

Features:

 

·         Configure the load balancer to accept traffic using the following protocols: HTTP, HTTPS (secure HTTP), TCP, and SSL (secure TCP).

·         Configure your EC2 instances to accept traffic only from your load balancer. You can use the operating systems and instance types supported by Amazon EC2.

·         Configure your load balancer to distribute requests to EC2 instances in multiple Availability Zones, minimizing the risk of overloading one single instance. If an entire Availability Zone goes offline, the load balancer routes traffic to instances in other Availability Zones.

·         No limit on the number of connections that your load balancer can attempt to make with your EC2 instances. The number of connections scales with the number of concurrent requests that the load balancer receives.

·         Configure the health checks that Elastic Load Balancing uses to monitor the health of the EC2 instances registered with the load balancer so that it can send requests only to the healthy instances.

·         You can use end-to-end traffic encryption on those networks that use secure (HTTPS/SSL) connections.

·         [EC2-VPC] - You can create an Internet-Facing Load Balancer, which takes requests from clients over the Internet and routes them to your EC2 instances, or an Internal Load Balancer, which takes requests from clients in your VPC and routes them to EC2 instances in your private subnets. Load balancers in EC2-Classic are always Internet-facing.

·         [EC2-Classic] - Load balancers for EC2-Classic support both IPv4 and IPv6 addresses. Load balancers for a VPC do not support IPv6 addresses.

·         You can monitor your load balancer using CloudWatch metrics, access logs, and AWS CloudTrail.

·         You can associate your Internet-facing load balancer with your domain name. Because the load balancer receives all requests from clients, you don't need to create and manage public domain names for the EC2 instances to which the load balancer routes traffic. You can point the instance's domain records at the load balancer instead and scale as needed (either adding or removing capacity) without having to update the records with each scaling activity.
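
As a sketch, the following creates a classic load balancer with the AWS SDK for Python (boto3), configures a health check, and registers an instance; the name, zones, health-check path, and instance ID are hypothetical.

import boto3

elb = boto3.client('elb', region_name='us-east-1')

# Create a load balancer listening for HTTP on port 80 across two zones.
elb.create_load_balancer(
    LoadBalancerName='demo-elb',
    Listeners=[{'Protocol': 'HTTP', 'LoadBalancerPort': 80,
                'InstanceProtocol': 'HTTP', 'InstancePort': 80}],
    AvailabilityZones=['us-east-1a', 'us-east-1b'],
)

# Health check: only healthy instances receive traffic.
elb.configure_health_check(
    LoadBalancerName='demo-elb',
    HealthCheck={'Target': 'HTTP:80/health', 'Interval': 30, 'Timeout': 5,
                 'UnhealthyThreshold': 2, 'HealthyThreshold': 2},
)

# Register a back-end instance (hypothetical ID).
elb.register_instances_with_load_balancer(
    LoadBalancerName='demo-elb',
    Instances=[{'InstanceId': 'i-12345678'}],
)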

 

Accessing ELB:

 

·         AWS Management Console— Provides a web interface that you can use to access Elastic Load Balancing.

·         AWS Command Line Interface (CLI) — Provides commands for a broad set of AWS services, including Elastic Load Balancing, and is supported on Windows, Mac, and Linux.

·         AWS SDKs — Provides language-specific APIs and takes care of many of the connection details, such as calculating signatures, handling request retries, and error handling.

·         Query API— Provides low-level APIs that you call using HTTPS requests.

·         SOAP API— Provides access to the Elastic Load Balancing web service using the SOAP web services messaging protocol.

·         ELB CLI — Provides commands to access Elastic Load Balancing; however, the AWS CLI is recommended instead.

 

 

8). Amazon VPC:

 

·         Amazon VPC enables you to launch AWS resources into a virtual network that you've defined.

·         This virtual network closely resembles a traditional network that you'd operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

 

·         A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS cloud.

·         You can launch your AWS resources, such as Amazon EC2 instances, into your VPC.

·         You can configure your VPC; you can select its IP address range, create subnets, and configure route tables, network gateways, and security settings.
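
A minimal sketch of that configuration with the AWS SDK for Python (boto3): create a VPC, carve out a subnet, and give it a route to the Internet. The CIDR blocks are hypothetical choices.

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# VPC with a /16 address range (hypothetical choice).
vpc_id = ec2.create_vpc(CidrBlock='10.0.0.0/16')['Vpc']['VpcId']

# One /24 subnet inside the VPC.
subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock='10.0.1.0/24')['Subnet']['SubnetId']

# Internet gateway plus a default route, making the subnet public.
igw_id = ec2.create_internet_gateway()['InternetGateway']['InternetGatewayId']
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

rt_id = ec2.create_route_table(VpcId=vpc_id)['RouteTable']['RouteTableId']
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock='0.0.0.0/0', GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)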

 

Regards,

Arun Manglick

02_AWS - Storage & Content Delivery

Amazon Web Services

 


 

Topics:

 

1.       Amazon S3

2.       Amazon CloudFront

3.       Amazon EBS

4.       Amazon EFS (preview)

5.       Amazon Glacier

6.       AWS Import/Export

7.       AWS Storage Gateway

 

 

1). Amazon S3

 

·         Amazon S3 is storage for the Internet.

·         Amazon S3 has a simple web services interface that you can use to store and retrieve any amount of data at any time, from anywhere on the web.

·         You can accomplish these tasks using the simple and intuitive web interface of the AWS Management Console.

 

 

S3 Concepts:

 

Buckets

·         Amazon S3 stores data as objects within buckets. An object consists of a file and optionally any metadata that describes that file.

·         Every object is contained in a bucket. For example, if the object named photos/puppy.jpg is stored in the johnsmith bucket, then it is addressable using the URL http://johnsmith.s3.amazonaws.com/photos/puppy.jpg

·         When you upload a file, you can set permissions on the object as well as any metadata.

·         Buckets are the containers for objects. You can have one or more buckets.

 

You can configure buckets so that they are created in a specific Region.

You can also configure a bucket so that every time an object is added to it, Amazon S3 generates a Unique Version ID and assigns it to the object

 

 

Objects

·         Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and metadata.

·         The data portion is opaque to Amazon S3. The metadata is a set of name-value pairs that describe the object.

·         These include some default metadata, such as the date last modified, and standard HTTP metadata, such as Content-Type.

·         You can also specify custom metadata at the time the object is stored.

·         An object is uniquely identified within a bucket by a key (Name) and a Version ID.

 

 

Keys

·         A key is the unique identifier for an object within a bucket. Every object in a bucket has exactly one key.

·         Because the combination of a bucket, key, and version ID uniquely identifies each object, Amazon S3 can be thought of as a basic data map between "bucket + key + version" and the object itself.

·         Every object in Amazon S3 can be uniquely addressed through the combination of the web service endpoint, bucket name, key, and, optionally, a version.
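
Reusing the johnsmith/photos/puppy.jpg example above, a minimal sketch with the AWS SDK for Python (boto3) looks like this; in practice the bucket name would have to be globally unique.

import boto3

s3 = boto3.client('s3')

# Create the bucket (in us-east-1; other regions need a CreateBucketConfiguration).
s3.create_bucket(Bucket='johnsmith')

# Store an object under the key "photos/puppy.jpg".
with open('puppy.jpg', 'rb') as f:
    s3.put_object(Bucket='johnsmith', Key='photos/puppy.jpg', Body=f)

# Retrieve it again by bucket + key.
obj = s3.get_object(Bucket='johnsmith', Key='photos/puppy.jpg')
data = obj['Body'].read()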

 

 

Regions

·         You can choose the geographical region where Amazon S3 will store the buckets you create.

·         You might choose a region to optimize latency, minimize costs, or address regulatory requirements.

·         Objects stored in a region never leave the region unless you explicitly transfer them to another region.

 

 

 

2). Amazon CloudFront

 

·         Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, for example, .html, .css, .php, image, and media files, to end users.

·         CloudFront delivers your content through a worldwide network of Edge Locations.

·         When an end user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency, so content is delivered with the best possible performance.

·         If the content is already in that edge location, CloudFront delivers it immediately. If the content is not currently in that edge location, CloudFront retrieves it from an Amazon S3 bucket or an HTTP server (for example, a web server) that you have identified as the source for the definitive version of your content.

 

 

3). Amazon Elastic Block Store (Amazon EBS)

 

·         Amazon EBS provides Block Level Storage Volumes for use with EC2 instances.

·         EBS volumes are highly available and reliable storage volumes that can be attached to any running instance that is in the same Availability Zone.

·         EBS volumes that are attached to an EC2 instance are exposed as storage volumes that persist independently from the life of the instance.

·         With Amazon EBS, you pay only for what you use.

 

·         Amazon EBS is recommended when data changes frequently and requires long-term persistence.

·         EBS volumes are particularly well-suited for use as the primary storage for file systems, databases, or for any applications that require fine granular updates and access to raw, unformatted, block-level storage. Amazon EBS is particularly helpful for database-style applications that frequently encounter many random reads and writes across the data set.

·         For simplified data encryption, you can launch your EBS volumes as encrypted volumes. Amazon EBS encryption offers you a simple encryption solution for your EBS volumes without the need for you to build, manage, and secure your own key management infrastructure. When you create an encrypted EBS volume and attach it to a supported instance type, data stored at rest on the volume, disk I/O, and snapshots created from the volume are all encrypted. The encryption occurs on the servers that host EC2 instances, providing encryption of data in transit from EC2 instances to EBS storage.

 

·         Amazon EBS encryption uses AWS Key Management Service (AWS KMS) master keys when creating encrypted volumes and any snapshots created from your encrypted volumes. The first time you create an encrypted EBS volume in a region, a default master key is created for you automatically. This key is used for Amazon EBS encryption unless you select a Customer Master Key (CMK) that you created separately using the AWS Key Management Service. Creating your own CMK gives you more flexibility, including the ability to create, rotate, disable, define access controls, and audit the encryption keys used to protect your data.

 

·         You can attach multiple volumes to the same instance within the limits specified by your AWS account. Your account has a limit on the number of EBS volumes that you can use, and the total storage available to you.
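
A minimal sketch with the AWS SDK for Python (boto3): create an encrypted volume and attach it to an instance in the same Availability Zone; the instance ID is a hypothetical placeholder.

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Create a 100 GiB encrypted volume; without KmsKeyId it uses the default master key.
vol = ec2.create_volume(
    AvailabilityZone='us-east-1a',   # must match the instance's zone
    Size=100,
    VolumeType='gp2',
    Encrypted=True,
)

# Wait until the volume is available, then attach it as a block device.
ec2.get_waiter('volume_available').wait(VolumeIds=[vol['VolumeId']])
ec2.attach_volume(VolumeId=vol['VolumeId'],
                  InstanceId='i-12345678',   # hypothetical instance
                  Device='/dev/sdf')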

 

 

4). Amazon Elastic File System (Amazon EFS)

 

·         Amazon EFS provides File Storage for your EC2 instances.

·         With Amazon EFS, you can create a file system, mount the file system on your EC2 instances, and then read and write data from your EC2 instances to and from your file system.

·         With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files, so your applications have the storage they need, when they need it.

·         Multiple Amazon EC2 instances can access an Amazon EFS file system at the same time, providing a common data source for workloads and applications running on more than one instance.

·         The service is designed to be highly scalable. Amazon EFS file systems can grow to petabyte scale, drive high levels of throughput, and support thousands of concurrent NFS connections.

·         Amazon EFS stores data and metadata across multiple Availability Zones in a region, providing high availability and durability.

·         Amazon EFS provides read-after-write consistency.

·         Amazon EFS is SSD-based and is designed to deliver low latencies for file operations. In addition, the service is designed to provide high-throughput read and write operations, and can support highly parallel workloads, efficiently handling parallel operations on the same file system from many different instances.

 

Note: Amazon EFS supports the NFSv4.0 protocol. The native NFS clients in Microsoft Windows Server 2012 and Windows Server 2008 support only NFSv2.0 and NFSv3.0.
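
A minimal sketch with the AWS SDK for Python (boto3): create a file system and a mount target in one subnet; the subnet and security group IDs are hypothetical, and the final NFS mount is shown as a comment.

import boto3

efs = boto3.client('efs', region_name='us-east-1')

# The creation token makes the call idempotent.
fs = efs.create_file_system(CreationToken='demo-efs-token')

# One mount target per Availability Zone is typical; the IDs here are placeholders.
efs.create_mount_target(
    FileSystemId=fs['FileSystemId'],
    SubnetId='subnet-12345678',
    SecurityGroups=['sg-12345678'],
)

# From an EC2 instance, the file system is then mounted over NFSv4, e.g.:
#   mount -t nfs4 <mount-target-dns>:/ /mnt/efs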

 

 

 

 

5). Amazon Glacier

 

·         Amazon Glacier is a storage service optimized for infrequently used data, or "Cold Data."  (If your application requires fast or frequent access to your data, consider using Amazon S3)

·         The service provides durable and extremely low-cost storage with security features for data archiving and backup.

·         With Amazon Glacier, you can store your data cost effectively for months, years, or even decades.

·         Amazon Glacier enables you to offload the administrative burdens of operating and scaling storage to AWS, so you don't have to worry about capacity planning, hardware provisioning, data replication, hardware failure detection and recovery, or time-consuming hardware migrations.

 

·         The Amazon Glacier data model core concepts include Vaults and Archives.

·         Amazon Glacier is a REST-based web service. In terms of REST, vaults and archives are the resources.

·         In addition, the Amazon Glacier data model includes Job and Notification-Configuration resources.  These resources complement the core resources.
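
As a sketch of the vault/archive model with the AWS SDK for Python (boto3); the vault name and archive file are hypothetical. The returned archive ID must be kept, since it is needed to initiate a retrieval job later.

import boto3

glacier = boto3.client('glacier', region_name='us-east-1')

# Vault: the container for archives.
glacier.create_vault(vaultName='demo-vault')

# Archive: upload a file; retrievals later run as asynchronous jobs.
with open('backup.tar.gz', 'rb') as f:
    archive = glacier.upload_archive(
        vaultName='demo-vault',
        archiveDescription='monthly backup',
        body=f,
    )

print(archive['archiveId'])   # save this ID to retrieve the archive later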

 

 

6). AWS Import/Export

 

·         AWS Import/Export is a service that accelerates transferring large amounts of data into and out of AWS using physical storage appliances, bypassing the Internet.

·         AWS Import/Export consists of

o    AWS Import/Export Snowball (Snowball), which uses on-demand, Amazon-provided secure storage appliances to physically transport terabytes to many petabytes of data, and

o    AWS Import/Export Disk, which utilizes customer-provided portable devices to transfer smaller datasets.

 

·         AWS transfers data directly onto and off of your storage devices using Amazon’s high-speed internal network.

·         Your data load typically begins the next business day after your storage device arrives at AWS. After the data export or import completes, AWS returns your storage device.

·         For large data sets, AWS Import/Export can be significantly faster than Internet transfer and more cost effective than upgrading your connectivity.

 

·         AWS Import/Export supports:

o    Import/Export to/from Amazon S3

o    Import to Amazon EBS

o    Import to Amazon Glacier

 

 

7). AWS Storage Gateway

 

·         AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to provide seamless integration (with data security features) between your on-premises IT environment and the Amazon Web Services (AWS) storage infrastructure.

·         You can use the service to store data in the AWS cloud for scalable and cost-effective storage that helps maintain data security.

·         AWS Storage Gateway offers both Volume-Based and Tape-Based storage solutions.

 

 

·         You may choose to run AWS Storage Gateway either

o    On-premises as a virtual machine (VM) appliance, or

o    In AWS, as an EC2 instance.

 

·         You deploy your gateway on an EC2 instance to provision iSCSI storage volumes in AWS.

·         Gateways hosted on EC2 instances can be used for disaster recovery, data mirroring, and providing storage for applications hosted on Amazon EC2.

 

 

 

Regards,

Arun Manglick
