
Amazon GameLift FleetIQ and Spot Instances – Save up to 90% On Game Server Hosting

AWS Blog - Thu, 02/22/2018 - 10:21

Amazon GameLift is a scalable, cloud-based runtime environment for session-based multiplayer games. You simply upload a build of your game, tell Amazon GameLift which type of EC2 instances you’d like to host it on, and sit back while Amazon GameLift takes care of setting up sessions and maintaining a suitably-sized fleet of EC2 instances. This automatic scaling allows you to accommodate demand that varies over time without having to keep compute resources in reserve during quiet periods.

Use Spot Instances
Last week we added a new feature to further decrease your per-player, per-hour costs when you host your game on Amazon GameLift. Before that launch, Amazon GameLift instances were always launched in On-Demand form. Instances of this type are always billed at fixed prices, as detailed on the Amazon GameLift Pricing page.

You can now make use of Amazon GameLift Spot Instances in your GameLift fleets. These instances represent unused capacity and have prices that rise and fall over time. While your results will vary, you may see savings of up to 90% when compared to On-Demand Instances.

While you can use Spot Instances as a simple money-saving tool, there are other interesting use cases as well. Every game has a life cycle, along with a cadre of loyal players who want to keep on playing until you finally unplug and decommission the servers. You could create an Amazon GameLift fleet made up of low-cost Spot Instances and keep that beloved game up and running as long as possible without breaking the bank. Behind the scenes, an Amazon GameLift Queue will make use of both Spot and On-Demand Instances, balancing price and availability in an attempt to give you the best possible service at the lowest price.

As I mentioned earlier, Spot Instances represent capacity that is not in use by On-Demand Instances. When this capacity decreases, existing Spot Instances could be interrupted with two minutes of notification and then terminated. Fortunately, there’s a lot of capacity and terminations are, statistically speaking, quite rare. To reduce the frequency even further, Amazon GameLift Queues now include a new feature that we call FleetIQ.

FleetIQ is powered by historical pricing and termination data for Spot Instances. This data, in combination with a very conservative strategy for choosing instance types, further reduces the odds that any particular game will be notified and then interrupted. The onProcessTerminate callback in your game’s server process will be activated if the underlying Spot Instance is about to be interrupted. At that point you have two minutes to close out the game, save any logs, free up any resources, and otherwise wrap things up. While you are doing this, you can call GetTerminationTime to see how much time remains.

Creating a Fleet
To take advantage of Spot Instances and FleetIQ, you can use the Amazon GameLift console or API to set up Queues with multiple fleets of Spot and On-Demand Instances. By adding more fleets into each Queue, you give FleetIQ more options to improve latency, interruption rate, and cost. To start a new game session on an instance, FleetIQ first selects the region with the lowest latency for each player, then chooses the fleet with the lowest interruption rate and cost.
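If you would rather script this setup than click through the console, here's a minimal sketch using the AWS SDK for Python (boto3). The build ID, launch path, On-Demand fleet ARN, and latency threshold are all placeholders, not values from this walkthrough:

import boto3

gamelift = boto3.client("gamelift", region_name="us-west-2")

# Create a Spot fleet; a sibling fleet with FleetType="ON_DEMAND" is created the same way.
spot_fleet = gamelift.create_fleet(
    Name="my-game-spot",
    BuildId="build-11111111-2222-3333-4444-555555555555",  # placeholder build ID
    EC2InstanceType="c4.large",
    FleetType="SPOT",
    RuntimeConfiguration={
        "ServerProcesses": [
            {"LaunchPath": "/local/game/MyGameServer", "ConcurrentExecutions": 1}
        ]
    },
)

# Put both fleets behind a Queue so that FleetIQ can balance price and availability.
gamelift.create_game_session_queue(
    Name="my-game-queue",
    TimeoutInSeconds=60,
    Destinations=[
        {"DestinationArn": spot_fleet["FleetAttributes"]["FleetArn"]},
        {"DestinationArn": "arn:aws:gamelift:us-west-2:123456789012:fleet/fleet-on-demand-placeholder"},
    ],
    PlayerLatencyPolicies=[
        {"MaximumIndividualPlayerLatencyMilliseconds": 100, "PolicyDurationSeconds": 60}
    ],
)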

Let’s walk through the process. I’ll create a fleet of On-Demand Instances and a fleet of Spot Instances, in that order:

And:

I take a quick break while the fleets are validated and activated:

Then I create a queue for my game. I select the fleets as the destinations for the queue:

If I am building a game that will have a global user base, I can create fleets in additional AWS Regions and use a player latency policy so that game sessions will be created in a suitable region:

To learn more about how to use this feature, take a look at the Spot Fleet Integration Guide.

Now Available
You can use Amazon GameLift Spot Instance fleets to host your session-based games now! Take a look, give it a try, and let me know what you think.

If you are planning to attend GDC this year, be sure to swing by booth 1001. Check out our GDC 2018 site for more information on our dev day talks, classroom sessions, and in-booth demos.

Jeff;

 


Now Available – AWS Serverless Application Repository

AWS Blog - Wed, 02/21/2018 - 11:13

Last year I suggested that you Get Ready for the AWS Serverless Application Repository and gave you a sneak peek. The Repository is designed to make it as easy as possible for you to discover, configure, and deploy serverless applications and components on AWS. It is also an ideal venue for AWS partners, enterprise customers, and independent developers to share their serverless creations.

Now Available
After a well-received public preview, the AWS Serverless Application Repository is now generally available and you can start using it today!

As a consumer, you will be able to tap into a thriving ecosystem of serverless applications and components that will be a perfect complement to your machine learning, image processing, IoT, and general-purpose work. You can configure and consume them as-is, or you can take them apart, add features, and submit pull requests to the author.

As a publisher, you can publish your contribution in the Serverless Application Repository with ease. You simply enter a name and a description, choose some labels to increase discoverability, select an appropriate open source license from a menu, and supply a README to help users get started. Then you enter a link to your existing source code repo, choose a SAM template, and designate a semantic version.

Let’s take a look at both operations…

Consuming a Serverless Application
The Serverless Application Repository is accessible from the Lambda Console. I can page through the existing applications or I can initiate a search:

A search for “todo” returns some interesting results:

I simply click on an application to learn more:

I can configure the application and deploy it right away if I am already familiar with the application:

I can expand each of the sections to learn more. The Permissions section tells me which IAM policies will be used:

And the Template section displays the SAM template that will be used to deploy the application:

I can inspect the template to learn more about the AWS resources that will be created when the template is deployed. I can also use the templates as a learning resource in preparation for creating and publishing my own application.

The License section displays the application’s license:

To deploy todo, I name the application and click Deploy:

Deployment starts immediately and is done within a minute (application deployment time will vary, depending on the number and type of resources to be created):

I can see all of my deployed applications in the Lambda Console:

There’s currently no way for a SAM template to indicate that an API Gateway function returns binary media types, so I set this up by hand and then re-deploy the API:

Following the directions in the Readme, I open the API Gateway Console and find the URL for the app in the API Gateway Dashboard:

I visit the URL and enter some items into my list:

Publishing a Serverless Application
Publishing applications is a breeze! I visit the Serverless App Repository page and click on Publish application to get started:

Then I assign a name to my application, enter my own name, and so forth:

I can choose from a long list of open-source friendly SPDX licenses:

I can create an initial version of my application at this point, or I can do it later. Either way, I simply provide a version number, a URL to a public repository containing my code, and a SAM template:
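The same publishing flow can be scripted. Here's a hedged sketch using the AWS SDK for Python (boto3); the application name, author, and URLs are placeholders, and the parameter names reflect my reading of the Serverless Application Repository API:

import boto3

serverlessrepo = boto3.client("serverlessrepo", region_name="us-east-1")

# Read the SAM template that describes the application's resources.
with open("template.yaml") as f:
    template_body = f.read()

serverlessrepo.create_application(
    Name="my-todo-app",                                        # placeholder
    Author="Jane Developer",                                   # placeholder
    Description="A simple serverless todo list",
    SpdxLicenseId="MIT",
    ReadmeUrl="https://github.com/example/my-todo-app/blob/master/README.md",
    SourceCodeUrl="https://github.com/example/my-todo-app",
    SemanticVersion="1.0.0",
    TemplateBody=template_body,
)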

Available Now
The AWS Serverless Application Repository is available now and you can start using it today, paying only for the AWS resources consumed by the serverless applications that you deploy.

You can deploy applications in the US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), and South America (São Paulo) Regions. You can publish from the US East (N. Virginia) or US East (Ohio) Regions for global availability.

Jeff;

 


Amazon Relational Database Service – Looking Back at 2017

AWS Blog - Mon, 02/12/2018 - 14:46

The Amazon RDS team launched nearly 80 features in 2017. Some of them were covered in this blog, others on the AWS Database Blog, and the rest in What’s New or Forum posts. To wrap up my week, I thought it would be worthwhile to give you an organized recap. So here we go!

Certification & Security

Features

Engine Versions & Features

Regional Support

Instance Support

Price Reductions

And That’s a Wrap
I’m pretty sure that’s everything. As you can see, 2017 was quite the year! I can’t wait to see what the team delivers in 2018.

Jeff;

 


AWS Hot Startups for February 2018: Canva, Figma, InVision

AWS Blog - Mon, 02/12/2018 - 10:08

Note to readers! Starting next month, we will be publishing our monthly Hot Startups blog post on the AWS Startup Blog. Please come check us out.

As visual communication—whether through social media channels like Instagram or white space-heavy product pages—becomes a central part of everyone’s life, accessible design platforms and tools become more and more important in the world of tech. This trend is why we have chosen to spotlight three design-related startups—namely Canva, Figma, and InVision—as our hot startups for the month of February. Please read on to learn more about these design-savvy companies and be sure to check out our full post here.

Canva (Sydney, Australia)

For a long time, creating designs required expensive software, extensive study, and time spent waiting for feedback from clients or colleagues. With Canva, a graphic design tool that makes creating designs much simpler and more accessible, users have the opportunity to design anything and publish anywhere. The platform—which integrates professional design elements, including stock photography, graphic elements, and fonts, so that users can build designs either entirely from scratch or from thousands of free templates—is available on desktop, iOS, and Android, making it possible to spin up an invitation, poster, or graphic on a smartphone at any time.

To learn more about Canva, read our full interview with CEO Melanie Perkins here.

Figma (San Francisco, CA)

Figma is a cloud-based design platform that empowers designers to communicate and collaborate more effectively. Using recent advancements in WebGL, Figma offers a design tool that doesn’t require users to install any software or special operating systems. It also allows multiple people to work in a file at the same time—a crucial feature.

As the need for new design talent increases, the industry will need plenty of junior designers to keep up with the demand. Figma is prepared to help students by offering their platform for free. Through this, they “hope to give young designers the resources necessary to kick-start their education and eventually, their careers.”

For more about Figma, check out our full interview with CEO Dylan Field here.

InVision (New York, NY)

Founded in 2011 with the goal of improving every digital experience in the world, digital product design platform InVision helps users create a streamlined and scalable product design process, build and iterate on prototypes, and collaborate across organizations. The company raised a $100 million Series E last November, bringing its total funding to $235 million, and currently powers the digital product design process at more than 80 percent of the Fortune 100 and at brands like Airbnb, HBO, Netflix, and Uber.

Learn more about InVision here.

Be sure to check out our full post on the AWS Startups blog!

-Tina


New – Encryption at Rest for DynamoDB

AWS Blog - Thu, 02/08/2018 - 12:02

At AWS re:Invent 2017, Werner encouraged his audience to “Dance like nobody is watching, and encrypt like everyone is.”

The AWS team is always eager to add features that make it easier for you to protect your sensitive data and to help you to achieve your compliance objectives. For example, in 2017 we launched encryption at rest for SQS and EFS, additional encryption options for S3, and server-side encryption of Kinesis Data Streams.

Today we are giving you another data protection option with the introduction of encryption at rest for Amazon DynamoDB. You simply enable encryption when you create a new table and DynamoDB takes care of the rest. Your data (tables, local secondary indexes, and global secondary indexes) will be encrypted using AES-256 and a service-default AWS Key Management Service (KMS) key. The encryption adds no storage overhead and is completely transparent; you can insert, query, scan, and delete items as before. The team did not observe any changes in latency after enabling encryption and running several different workloads on an encrypted DynamoDB table.

Creating an Encrypted Table
You can create an encrypted table from the AWS Management Console, API (CreateTable), or CLI (create-table). I’ll use the console! I enter the name and set up the primary key as usual:

Before proceeding, I uncheck Use default settings, scroll down to the Encryption section, and check Enable encryption. Then I click Create and my table is created in encrypted form:
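If you prefer the API to the console, here's a minimal sketch using the AWS SDK for Python (boto3); the key schema and capacity values are placeholders:

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-west-2")

dynamodb.create_table(
    TableName="reg-users",
    AttributeDefinitions=[{"AttributeName": "UserId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "UserId", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    # Encrypt at rest using the service-default KMS key.
    SSESpecification={"Enabled": True},
)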

I can see the encryption setting for the table at a glance:

When my compliance team asks me to show them how DynamoDB uses the key to encrypt the data, I can create an AWS CloudTrail trail, insert an item, and then scan the table to see the calls to the AWS KMS API. Here's an extract from the trail:

{
  "eventTime": "2018-01-24T00:06:34Z",
  "eventSource": "kms.amazonaws.com",
  "eventName": "Decrypt",
  "awsRegion": "us-west-2",
  "sourceIPAddress": "dynamodb.amazonaws.com",
  "userAgent": "dynamodb.amazonaws.com",
  "requestParameters": {
    "encryptionContext": {
      "aws:dynamodb:tableName": "reg-users",
      "aws:dynamodb:subscriberId": "1234567890"
    }
  },
  "responseElements": null,
  "requestID": "7072def1-009a-11e8-9ab9-4504c26bd391",
  "eventID": "3698678a-d04e-48c7-96f2-3d734c5c7903",
  "readOnly": true,
  "resources": [
    {
      "ARN": "arn:aws:kms:us-west-2:1234567890:key/e7bd721d-37f3-4acd-bec5-4d08c765f9f5",
      "accountId": "1234567890",
      "type": "AWS::KMS::Key"
    }
  ]
}

Available Now
This feature is available now in the US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland) Regions and you can start using it today.

There’s no charge for the encryption; you will be charged for the calls that DynamoDB makes to AWS KMS on your behalf.

Jeff;

 


Give Your WordPress Blog a Voice With Our New Amazon Polly Plugin

AWS Blog - Thu, 02/08/2018 - 05:42

I first told you about Polly in late 2016 in my post Amazon Polly – Text to Speech in 47 Voices and 24 Languages. After that AWS re:Invent launch, we added support for Korean, added five new voices, and made Polly available in all Regions in the aws partition. We also added whispering, speech marks, a timbre effect, and dynamic range compression.

New WordPress Plugin
Today we are launching a WordPress plugin that uses Polly to create high-quality audio versions of your blog posts. You can access the audio from within the post or in podcast form using a feature that we call Amazon Pollycast! Both options make your content more accessible and can help you to reach a wider audience. This plugin was a joint effort between the AWS team and our friends at AWS Advanced Technology Partner WP Engine.

As you will see, the plugin is easy to install and configure. You can use it with installations of WordPress that you run on your own infrastructure or on AWS. Either way, you have access to all of Polly’s voices along with a wide variety of configuration options. The generated audio (an MP3 file for each post) can be stored alongside your WordPress content, or in Amazon Simple Storage Service (S3), with optional support for content distribution via Amazon CloudFront.

Installing the Plugin
I don't have an existing WordPress-powered blog, so I begin by launching a Lightsail instance using the WordPress 4.8.1 blueprint:

Then I follow these directions to access my login credentials:

Credentials in hand, I log in to the WordPress Dashboard:

The plugin makes calls to AWS, and needs to have credentials in order to do so. I hop over to the IAM Console and create a new policy. The policy allows the plugin to access a carefully selected set of S3 and Polly functions (find the full policy in the README):

Then I create an IAM user (wp-polly-user). I enter the name and indicate that it will be used for Programmatic Access:

Then I attach the policy that I just created, and click on Review:

I review my settings (not shown) and then click on Create User. Then I copy the two values (Access Key ID and Secret Access Key) into a secure location. Possession of these keys allows the bearer to make calls to AWS so I take care not to leave them lying around.

Now I am ready to install the plugin! I go back to the WordPress Dashboard and click on Add New in the Plugins menu:

Then I click on Upload Plugin and locate the ZIP file that I downloaded from the WordPress Plugins site. After I find it I click on Install Now to proceed:

WordPress uploads and installs the plugin. Now I click on Activate Plugin to move ahead:

With the plugin installed, I click on Settings to set it up:

I enter my keys and click on Save Changes:

The General settings let me control the sample rate, voice, player position, the default setting for new posts, and the autoplay option. I can leave all of the settings as-is to get started:

The Cloud Storage settings let me store audio in S3 and to use CloudFront to distribute the audio:

The Amazon Pollycast settings give me control over the iTunes parameters that are included in the generated RSS feed:

Finally, the Bulk Update button lets me regenerate all of the audio files after I change any of the other settings:

With the plugin installed and configured, I can create a new post. As you can see, the plugin can be enabled and customized for each post:

I can see how much it will cost to convert to audio with a click:

When I click on Publish, the plugin breaks the text into multiple blocks on sentence boundaries, calls the Polly SynthesizeSpeech API for each block, and accumulates the resulting audio in a single MP3 file. The published blog post references the file using the <audio> tag. Here’s the post:

I can’t seem to use an <audio> tag in this post, but you can download and play the MP3 file yourself if you’d like.
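The plugin itself is written in PHP, but the block-by-block synthesis strategy is easy to approximate with the AWS SDK for Python (boto3). Here's a rough sketch, with a naive sentence splitter standing in for the plugin's logic:

import boto3

polly = boto3.client("polly", region_name="us-east-1")

post_text = "First sentence of the post. Second sentence. And so on."
# Naive stand-in for the plugin's sentence-boundary splitting.
blocks = [s.strip() + "." for s in post_text.split(".") if s.strip()]

with open("post.mp3", "wb") as mp3:
    for block in blocks:
        # One SynthesizeSpeech call per block; the MP3 frames can be concatenated directly.
        response = polly.synthesize_speech(
            Text=block, OutputFormat="mp3", VoiceId="Joanna"
        )
        mp3.write(response["AudioStream"].read())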

The Pollycast feature generates an RSS file with links to an MP3 file for each post:

Pricing
The plugin will make calls to Amazon Polly each time the post is saved or updated. Pricing is based on the number of characters in the speech requests, as described on the Polly Pricing page. Also, the AWS Free Tier lets you process up to 5 million characters per month at no charge, for a period of one year that starts when you make your first call to Polly.
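To make that concrete, here's a back-of-the-envelope sketch in Python; the per-million-character rate is an assumed example value based on my reading of the Polly pricing page, so check there for current numbers:

# Estimate the cost of synthesizing one post (rate is an assumed example value).
RATE_PER_MILLION_CHARS = 4.00   # USD, assumed standard-voice rate; verify on the pricing page

post_chars = 3000               # a typical medium-length post
cost = post_chars / 1_000_000 * RATE_PER_MILLION_CHARS
print(f"Estimated cost: ${cost:.4f}")   # about $0.012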

Going Further
The plugin is available on GitHub in source code form and we are looking forward to your pull requests! Here are a couple of ideas to get you started:

Voice Per Author – Allow selection of a distinct Polly voice for each author.

Quoted Text – For blogs that make frequent use of embedded quotes, use a distinct voice for the quotes.

Translation – Use Amazon Translate to translate the texts into another language, and then use Polly to generate audio in that language.

Other Blogging Engines – Build a similar plugin for your favorite blogging engine.

SSML Support – Figure out an interesting way to use Polly’s SSML tags to add additional character to the audio.

Let me know what you come up with!

Jeff;

 


New AWS Developer Training in Collaboration with edX.org

AWS Blog - Mon, 02/05/2018 - 11:59

I recently heard my manager (Ariel Kelman, VP of Marketing for AWS) talk about the important role that education plays in our work. In fact, he assigned it a significantly higher priority than traditional marketing activities that focus on leads or conversions. I’ve also heard our other leaders talk about their work to create highly scalable education programs that will allow developers, architects, and other IT professionals to improve their skills and to earn AWS Certifications.

AWS Developer Professional Series
Today I would like to tell you about the new AWS Developer Professional Series. The AWS Training and Certification team has teamed up with edX to create this new three-part series. Founded by MIT and Harvard, edX is the leading non-profit online learning destination, with a global community of over 14 million learners, backed by 130 global partners including universities, non-profits, and institutions. This collaboration expands our offerings, and gives you another training option!

The new series is designed to help you and your colleagues to build development and DevOps skills on AWS. The courses are self-paced and build on each other in order to help you to create Python applications that run on AWS by way of the AWS SDK for Python (also known as Boto). Here are the courses:

AWS Developer: Building on AWS – This course will give you an introduction to AWS services and to the AWS SDKs. You’ll create and manage an AWS account, learn about Regions, AZs, and VPCs, and install SDKs. Then you will learn how to launch Amazon Elastic Compute Cloud (EC2) instances, set up AWS Lambda functions, and use managed services such as Amazon Relational Database Service (RDS). You’ll also learn how to use our AI services for image analysis and text-to-speech, and wrap up by focusing on availability and durability.

AWS Developer: Deploying on AWS – This course will teach you about the concepts and practices that allow you to practice DevOps on AWS. You will learn how to use developer tools like AWS CodeBuild and AWS CodeDeploy, while monitoring your development and production environments using Amazon CloudWatch.

AWS Developer: Optimizing on AWS – This course focuses on performance optimization and tuning of the application that you built in the predecessor courses. You will learn how to use caching and content distribution to increase performance and to improve the end-user experience for your app. You’ll also learn how to use AWS Key Management Service (KMS) to encrypt data at rest and in transit.

The courses are built with the expectation that you already have one to three years of software development experience, including some Python skills. Each course runs for six weeks and requires three to four hours of work per week on your part. Courses start in February (Building), April (Deploying), and May (Optimizing), and you can enroll now at no charge. You can also pursue a Verified Certificate for a fee of $149 per course.

Jeff;


The Floodgates Are Open – Increased Network Bandwidth for EC2 Instances

AWS Blog - Fri, 01/26/2018 - 16:53

I hope that you have configured your AMIs and your current-generation EC2 instances to use the Elastic Network Adapter (ENA) that I told you about back in mid-2016. The ENA gives you high throughput and low latency, while minimizing the load on the host processor. It is designed to work well in the presence of multiple vCPUs, with intelligent packet routing backed up by multiple transmit and receive queues.

Today we are opening up the floodgates and giving you access to more bandwidth in all AWS Regions. Here are the specifics (in each case, the actual bandwidth is dependent on the instance type and size):

EC2 to S3 – Traffic to and from Amazon Simple Storage Service (S3) can now take advantage of up to 25 Gbps of bandwidth. Previously, traffic of this type had access to 5 Gbps of bandwidth. This will be of benefit to applications that access large amounts of data in S3 or that make use of S3 for backup and restore.

EC2 to EC2 – Traffic to and from EC2 instances in the same or different Availability Zones within a region can now take advantage of up to 5 Gbps of bandwidth for single-flow traffic, or 25 Gbps of bandwidth for multi-flow traffic (a flow represents a single, point-to-point network connection) by using private IPv4 or IPv6 addresses, as described here.

EC2 to EC2 (Cluster Placement Group) – Traffic to and from EC2 instances within a cluster placement group can continue to take advantage of up to 10 Gbps of lower-latency bandwidth for single-flow traffic, or 25 Gbps of lower-latency bandwidth for multi-flow traffic.

To take advantage of this additional bandwidth, make sure that you are using the latest, ENA-enabled AMIs on current-generation EC2 instances. ENA-enabled AMIs are available for Amazon Linux, Ubuntu 14.04 & 16.04, RHEL 7.4, SLES 12, and Windows Server (2008 R2, 2012, 2012 R2, and 2016). The FreeBSD AMI in AWS Marketplace is also ENA-enabled, as is VMware Cloud on AWS.
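Not sure whether a given instance has the ENA enabled? You can check from the AWS SDK for Python (boto3); the instance ID below is a placeholder:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# EnaSupport is True when the instance has ENA-based enhanced networking enabled.
response = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])
instance = response["Reservations"][0]["Instances"][0]
print("ENA enabled:", instance.get("EnaSupport", False))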

Jeff;


New – Inter-Region VPC Peering

AWS Blog - Mon, 01/22/2018 - 11:09

I’m still catching up with the last couple of AWS re:Invent launches!

Today I would like to tell you about inter-region VPC peering. You have been able to create peering connections between Virtual Private Clouds (VPCs) in the same AWS Region since early 2014 (read New VPC Peering for the Amazon Virtual Private Cloud to learn more). Once established, EC2 instances in the peered VPCs can communicate with each other across the peering connection using their private IP addresses, just as if they were on the same network.

At re:Invent we extended the peering model so that it works across AWS Regions. Like the existing model, it also works within the same AWS account or across a pair of accounts. All of the use cases that I listed in my earlier post still apply; you can centralize shared resources in an organization-wide VPC and then peer it with multiple, per-department VPCs. You can also share resources between members of a consortium, conglomerate, or joint venture.

Inter-region VPC peering also allows you to take advantage of the high degree of isolation that exists between AWS Regions while building highly functional applications that span Regions. For example, you can choose geographic locations for your compute and storage resources that will help you to comply with regulatory requirements and other constraints.

Peering Details
This feature is currently enabled in the US East (Northern Virginia), US East (Ohio), US West (Oregon), and EU (Ireland) Regions and for IPv4 traffic. You can connect any two VPCs in these Regions, as long as they have distinct, non-overlapping CIDR blocks. This ensures that all of the private IP addresses are unique and allows all of the resources in the pair of VPCs to address each other without the need for any form of network address translation.

Connections are requested by sending an invitation from one VPC to the other and the invitation must be accepted in order to establish the connection. You can set up a peering connection using the AWS Management Console, the VPC APIs, the AWS Command Line Interface (CLI), or the AWS Tools for Windows PowerShell.

Data that passes between VPCs in distinct regions flows across the AWS global network in encrypted form. The data is encrypted in AEAD fashion using a modern algorithm and AWS-supplied keys that are managed and rotated automatically. The same key is used to encrypt traffic for all peering connections; this makes all traffic, regardless of customer, look the same. This anonymity provides additional protection in situations where your inter-VPC traffic is intermittent.

Setting up Inter-Region Peering
Here’s how I set up peering between two of my VPCs. I’ll start with a VPC in US East (Northern Virginia) and request peering with a VPC in US East (Ohio). I start by noting the ID (vpc-acd8ccc5) of the VPC in Ohio:

Then I switch to the US East (Northern Virginia) Region, click on Create Peering Connection, and choose to peer with the VPC in Ohio. I enter the ID and click on Create Peering Connection to proceed:

This creates a peering request:

I switch to the other Region and accept the pending request:

Now I need to arrange to route IPv4 traffic between the two VPCs by creating route table entries in each one. I can edit the main route table or one associated with a particular VPC subnet. Here’s how I arrange to route traffic from Virginia to Ohio:

And here’s how I route it from Ohio to Virginia:

To learn more about how to do this, read Updating Your Route Tables for a VPC Peering Connection.
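The console steps above map directly onto API calls. Here's a sketch with the AWS SDK for Python (boto3); the Virginia VPC ID, route table IDs, and CIDR blocks are placeholders (the Ohio VPC ID is the one noted above):

import boto3

use1 = boto3.client("ec2", region_name="us-east-1")  # requester (Virginia)
use2 = boto3.client("ec2", region_name="us-east-2")  # accepter (Ohio)

# Request peering from the Virginia VPC to the Ohio VPC.
peering = use1.create_vpc_peering_connection(
    VpcId="vpc-11111111",        # placeholder Virginia VPC
    PeerVpcId="vpc-acd8ccc5",    # the Ohio VPC
    PeerRegion="us-east-2",
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Accept the request in Ohio (it can take a few seconds to propagate).
use2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Route each VPC's traffic for the other's CIDR block across the connection.
use1.create_route(RouteTableId="rtb-11111111", DestinationCidrBlock="10.90.0.0/16",
                  VpcPeeringConnectionId=pcx_id)
use2.create_route(RouteTableId="rtb-22222222", DestinationCidrBlock="10.80.0.0/16",
                  VpcPeeringConnectionId=pcx_id)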

The private DNS names for EC2 instances (ip-10-90-211-18.ec2.internal and the like) will not resolve across a peering connection. If you need to refer to EC2 instances and other AWS resources in other VPCs, consider creating a Private Hosted Zone using Amazon Route 53:

Unlike VPC peering within a single region, you cannot reference security groups across an inter-region VPC peering connection. Also, jumbo frames cannot be sent between regions.

Jeff;

 


Recent EC2 Goodies – Launch Templates and Spread Placement

AWS Blog - Fri, 01/19/2018 - 15:50

We launched some important new EC2 instance types and features at AWS re:Invent. I’ve already told you about the M5, H1, T2 Unlimited and Bare Metal instances, and about Spot features such as Hibernation and the New Pricing Model. Randall told you about the Amazon Time Sync Service. Today I would like to tell you about two of the features that we launched: Spread placement groups and Launch Templates. Both features are available in the EC2 Console and from the EC2 APIs, and can be used in all of the AWS Regions in the “aws” partition.

Launch Templates
You can use launch templates to store the instance, network, security, storage, and advanced parameters that you use to launch EC2 instances, and can also include any desired tags. Each template can include any desired subset of the full collection of parameters. You can, for example, define common configuration parameters such as tags or network configurations in a template, and allow the other parameters to be specified as part of the actual launch.

Templates give you the power to set up a consistent launch environment that spans instances launched in On-Demand and Spot form, as well as through EC2 Auto Scaling and as part of a Spot Fleet. You can use them to implement organization-wide standards and to enforce best practices, and you can give your IAM users the ability to launch instances via templates while withholding the ability to do so via the underlying APIs.

Templates are versioned and you can use any desired version when you launch an instance. You can create templates from scratch, base them on the previous version, or copy the parameters from a running instance.
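If you'd like to see the API flavor before the console walkthrough, here's a minimal sketch using the AWS SDK for Python (boto3); the template name and AMI ID are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Store the common parameters once, as version 1 of the template.
ec2.create_launch_template(
    LaunchTemplateName="web-server",             # placeholder name
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",      # placeholder AMI
        "InstanceType": "t2.micro",
        "TagSpecifications": [
            {"ResourceType": "instance",
             "Tags": [{"Key": "team", "Value": "web"}]},
        ],
    },
)

# Launch from the template, overriding just the instance type at launch time.
ec2.run_instances(
    LaunchTemplate={"LaunchTemplateName": "web-server", "Version": "1"},
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
)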

Here’s how you create a launch template in the Console:

Here’s how to include network interfaces, storage volumes, tags, and security groups:

And here’s how to specify advanced and specialized parameters:

You don’t have to specify values for all of these parameters in your templates; enter the values that are common to multiple instances or launches and specify the rest at launch time.

When you click Create launch template, the template is created and can be used to launch On-Demand instances, create Auto Scaling Groups, and create Spot Fleets:

The Launch Instance button now gives you the option to launch from a template:

Simply choose the template and the version, and finalize all of the launch parameters:

You can also manage your templates and template versions from the Console:

To learn more about this feature, read Launching an Instance from a Launch Template.

Spread Placement Groups
Spread placement groups indicate that you do not want the instances in the group to share the same underlying hardware. Applications that rely on a small number of critical instances can launch them in a spread placement group to reduce the odds that one hardware failure will impact more than one instance. Here are a couple of things to keep in mind when you use spread placement groups:

  • Availability Zones – A single spread placement group can span multiple Availability Zones. You can have a maximum of seven running instances per Availability Zone per group.
  • Unique Hardware – Launch requests can fail if there is insufficient unique hardware available. The situation changes over time as overall usage changes and as we add additional hardware; you can retry failed requests at a later time.
  • Instance Types – You can launch a wide variety of M4, M5, C3, R3, R4, X1, X1e, D2, H1, I2, I3, HS1, F1, G2, G3, P2, and P3 instance types in spread placement groups.
  • Reserved Instances – Instances launched into a spread placement group can make use of reserved capacity. However, you cannot currently reserve capacity for a placement group and could receive an ICE (Insufficient Capacity Error) even if you have some RIs available.
  • Applicability – You cannot use spread placement groups in conjunction with Dedicated Instances or Dedicated Hosts.

You can create and use spread placement groups from the AWS Management Console, the AWS Command Line Interface (CLI), the AWS Tools for Windows PowerShell, and the AWS SDKs. The console has a new feature that will help you to learn how to use the command line:
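Here's a rough equivalent of that command-line flow using the AWS SDK for Python (boto3); the group name and AMI ID are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a spread placement group; instances in it land on distinct hardware.
ec2.create_placement_group(GroupName="critical-spread", Strategy="spread")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="m5.large",
    MinCount=7,                        # the per-AZ maximum for a spread group
    MaxCount=7,
    Placement={"GroupName": "critical-spread"},
)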

You can specify an existing placement group or create a new one when you launch an EC2 instance:

To learn more, read about Placement Groups.

Jeff;


New AWS Auto Scaling – Unified Scaling For Your Cloud Applications

AWS Blog - Tue, 01/16/2018 - 17:50

I’ve been talking about scalability for servers and other cloud resources for a very long time! Back in 2006, I wrote “This is the new world of scalable, on-demand web services. Pay for what you need and use, and not a byte more.” Shortly after we launched Amazon Elastic Compute Cloud (EC2), we made it easy for you to do this with the simultaneous launch of Elastic Load Balancing, EC2 Auto Scaling, and Amazon CloudWatch. Since then we have added Auto Scaling to other AWS services including ECS, Spot Fleets, DynamoDB, Aurora, AppStream 2.0, and EMR. We have also added features such as target tracking to make it easier for you to scale based on the metric that is most appropriate for your application.

Introducing AWS Auto Scaling
Today we are making it easier for you to use the Auto Scaling features of multiple AWS services from a single user interface with the introduction of AWS Auto Scaling. This new service unifies and builds on our existing, service-specific, scaling features. It operates on any desired EC2 Auto Scaling groups, EC2 Spot Fleets, ECS tasks, DynamoDB tables, DynamoDB Global Secondary Indexes, and Aurora Replicas that are part of your application, as described by an AWS CloudFormation stack or in AWS Elastic Beanstalk (we’re also exploring some other ways to flag a set of resources as an application for use with AWS Auto Scaling).

You no longer need to set up alarms and scaling actions for each resource and each service. Instead, you simply point AWS Auto Scaling at your application and select the services and resources of interest. Then you select the desired scaling option for each one, and AWS Auto Scaling will do the rest, helping you to discover the scalable resources and then creating a scaling plan that addresses the resources of interest.

If you have tried to use any of our Auto Scaling options in the past, you undoubtedly understand the trade-offs involved in choosing scaling thresholds. AWS Auto Scaling gives you a variety of scaling options: You can optimize for availability, keeping plenty of resources in reserve in order to meet sudden spikes in demand. You can optimize for costs, running close to the line and accepting the possibility that you will tax your resources if that spike arrives. Alternatively, you can aim for the middle, with a generous but not excessive level of spare capacity. In addition to optimizing for availability, cost, or a blend of both, you can also set a custom scaling threshold. In each case, AWS Auto Scaling will create scaling policies on your behalf, including appropriate upper and lower bounds for each resource.
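Behind the console, this is the Auto Scaling Plans API. Here's a hedged sketch using the AWS SDK for Python (boto3); the stack ARN, group name, and capacity limits are placeholders, and the parameter names reflect my reading of the API:

import boto3

plans = boto3.client("autoscaling-plans", region_name="us-east-1")

plans.create_scaling_plan(
    ScalingPlanName="my-app-plan",
    ApplicationSource={
        # Placeholder CloudFormation stack ARN that flags the application's resources.
        "CloudFormationStackARN": "arn:aws:cloudformation:us-east-1:123456789012:stack/my-app/abc123"
    },
    ScalingInstructions=[
        {
            "ServiceNamespace": "autoscaling",
            "ResourceId": "autoScalingGroup/my-app-asg",   # placeholder group name
            "ScalableDimension": "autoscaling:autoScalingGroup:DesiredCapacity",
            "MinCapacity": 2,
            "MaxCapacity": 10,
            "TargetTrackingConfigurations": [
                {
                    "PredefinedScalingMetricSpecification": {
                        "PredefinedScalingMetricType": "ASGAverageCPUUtilization"
                    },
                    "TargetValue": 50.0,   # keep average CPU near 50%
                }
            ],
        }
    ],
)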

AWS Auto Scaling in Action
I will use AWS Auto Scaling on a simple CloudFormation stack consisting of an Auto Scaling group of EC2 instances and a pair of DynamoDB tables. I start by removing the existing Scaling Policies from my Auto Scaling group:

Then I open up the new Auto Scaling Console and select the stack:

Behind the scenes, Elastic Beanstalk applications are always launched via a CloudFormation stack. In the screen shot above, awseb-e-sdwttqizbp-stack is an Elastic Beanstalk application that I launched.

I can click on any stack to learn more about it before proceeding:

I select the desired stack and click on Next to proceed. Then I enter a name for my scaling plan and choose the resources that I’d like it to include:

I choose the scaling strategy for each type of resource:

After I have selected the desired strategies, I click Next to proceed. Then I review the proposed scaling plan, and click Create scaling plan to move ahead:

The scaling plan is created and in effect within a few minutes:

I can click on the plan to learn more:

I can also inspect each scaling policy:

I tested my new policy by applying a load to the initial EC2 instance, and watched the scale out activity take place:

I also took a look at the CloudWatch metrics for the EC2 Auto Scaling group:

Available Now
We are launching AWS Auto Scaling today in the US East (Northern Virginia), US East (Ohio), US West (Oregon), EU (Ireland), and Asia Pacific (Singapore) Regions, with more to follow. There’s no charge for AWS Auto Scaling; you pay only for the CloudWatch Alarms that it creates and any AWS resources that you consume.

As is often the case with our new services, this is just the first step on what we hope to be a long and interesting journey! We have a long roadmap, and we’ll be adding new features and options throughout 2018 in response to your feedback.

Jeff;


Now Open – Third AWS Availability Zone in London

AWS Blog - Mon, 01/15/2018 - 16:00

We expand AWS by picking a geographic area (which we call a Region) and then building multiple, isolated Availability Zones in that area. Each Availability Zone (AZ) has multiple Internet connections and power connections to multiple grids.

Today I am happy to announce that we are opening our 50th AWS Availability Zone, with the addition of a third AZ to the EU (London) Region. This will give you additional flexibility to architect highly scalable, fault-tolerant applications that run across multiple AZs in the UK.

Since launching the EU (London) Region, we have seen an ever-growing set of customers, particularly in the public sector and in regulated industries, use AWS for new and innovative applications. Here are a couple of examples, courtesy of my AWS colleagues in the UK:

Enterprise – Some of the UK’s most respected enterprises are using AWS to transform their businesses, including BBC, BT, Deloitte, and Travis Perkins. Travis Perkins is one of the largest suppliers of building materials in the UK and is implementing the biggest systems and business change in its history, including an all-in migration of its data centers to AWS.

Startups – Cross-border payments company Currencycloud has migrated its entire payments production and demo platform to AWS, resulting in a 30% saving on their infrastructure costs. Clearscore, which plans to disrupt the credit score industry, has also chosen to host their entire platform on AWS. UnderwriteMe is using the EU (London) Region to offer an underwriting platform to their customers as a managed service.

Public Sector – The Met Office chose AWS to support the Met Office Weather App, available for iPhone and Android phones. Since the Met Office Weather App went live in January 2016, it has attracted more than half a million users. Using AWS, the Met Office has been able to increase agility, speed, and scalability while reducing costs. The Driver and Vehicle Licensing Agency (DVLA) is using the EU (London) Region for services such as the Strategic Card Payments platform, which helps the agency achieve PCI DSS compliance.

The AWS EU (London) Region has achieved Public Services Network (PSN) assurance, which provides UK Public Sector customers with an assured infrastructure on which to build UK Public Sector services. In conjunction with AWS’s Standardized Architecture for UK-OFFICIAL, PSN assurance enables UK Public Sector organizations to move their UK-OFFICIAL classified data to the EU (London) Region in a controlled and risk-managed manner.

For a complete list of AWS Regions and Services, visit the AWS Global Infrastructure page. As always, pricing for services in the Region can be found on the detail pages; visit our Cloud Products page to get started.

Jeff;


AWS IoT, Greengrass, and Machine Learning for Connected Vehicles at CES

AWS Blog - Wed, 01/10/2018 - 12:12

Last week I attended a talk given by Bryan Mistele, president of Seattle-based INRIX. Bryan’s talk provided a glimpse into the future of transportation, centering around four principal attributes, often abbreviated as ACES:

Autonomous – Cars and trucks are gaining the ability to scan and to make sense of their environments and to navigate without human input.

Connected – Vehicles of all types have the ability to take advantage of bidirectional connections (either full-time or intermittent) to other cars and to cloud-based resources. They can upload road and performance data, communicate with each other to run in packs, and take advantage of traffic and weather data.

Electric – Continued development of battery and motor technology will make electric vehicles more convenient, cost-effective, and environmentally friendly.

Shared – Ride-sharing services will change usage from an ownership model to an as-a-service model (sound familiar?).

Individually and in combination, these emerging attributes mean that the cars and trucks we will see and use in the decade to come will be markedly different than those of the past.

On the Road with AWS
AWS customers are already using our AWS IoT, edge computing, Amazon Machine Learning, and Alexa products to bring this future to life – vehicle manufacturers, their tier 1 suppliers, and AutoTech startups all use AWS for their ACES initiatives. AWS Greengrass is playing an important role here, attracting design wins and helping our customers to add processing power and machine learning inferencing at the edge.

AWS customer Aptiv (formerly Delphi) talked about their Automated Mobility on Demand (AMoD) smart vehicle architecture in an AWS re:Invent session. Aptiv’s AMoD platform uses Greengrass and microservices to drive the onboard user experience, along with edge processing, monitoring, and control. Here’s an overview:

Another customer, Denso of Japan (one of the world’s largest suppliers of auto components and software) is using Greengrass and AWS IoT to support their vision of Mobility as a Service (MaaS). Here’s a video:

AWS at CES
The AWS team will be out in force at CES in Las Vegas and would love to talk to you. They’ll be running demos that show how AWS can help to bring innovation and personalization to connected and autonomous vehicles.

Personalized In-Vehicle Experience – This demo shows how AWS AI and Machine Learning can be used to create a highly personalized and branded in-vehicle experience. It makes use of Amazon Lex, Polly, and Amazon Rekognition, but the design is flexible and can be used with other services as well. The demo encompasses driver registration, login and startup (including facial recognition), voice assistance for contextual guidance, personalized e-commerce, and vehicle control. Here’s the architecture for the voice assistance:

Connected Vehicle Solution – This demo shows how a connected vehicle can combine local and cloud intelligence, using edge computing and machine learning at the edge. It handles intermittent connections and uses AWS DeepLens to train a model that responds to distracted drivers. Here’s the overall architecture, as described in our Connected Vehicle Solution:

Digital Content Delivery – This demo will show how a customer can use a web-based 3D configurator to build and personalize their vehicle. It will also show high-resolution (4K) 3D images and an optional immersive AR/VR experience, both designed for use within a dealership.

Autonomous Driving – This demo will showcase the AWS services that can be used to build autonomous vehicles. There’s a 1/16th scale model vehicle powered and driven by Greengrass and an overview of a new AWS Autonomous Toolkit. As part of the demo, attendees drive the car, training a model via Amazon SageMaker for subsequent on-board inferencing, powered by Greengrass ML Inferencing.

To speak to one of my colleagues or to set up a time to see the demos, check out the Visit AWS at CES 2018 page.

Some Resources
If you are interested in this topic and want to learn more, the AWS for Automotive page is a great starting point, with discussions on connected vehicles & mobility, autonomous vehicle development, and digital customer engagement.

When you are ready to start building a connected vehicle, the AWS Connected Vehicle Solution contains a reference architecture that combines local computing, sophisticated event rules, and cloud-based data processing and storage. You can use this solution to accelerate your own connected vehicle projects.

Jeff;


AWS Online Tech Talks – January 2018

AWS Blog - Mon, 01/08/2018 - 11:31

Happy New Year! Kick off 2018 right by expanding your AWS knowledge with a great batch of new Tech Talks. We’re covering some of the biggest launches from re:Invent including Amazon Neptune, Amazon Rekognition Video, AWS Fargate, AWS Cloud9, Amazon Kinesis Video Streams, AWS PrivateLink, AWS Single Sign-On and more!

January 2018 – Schedule

Noted below are the upcoming scheduled live, online technical sessions being held during the month of January. Make sure to register ahead of time so you won’t miss out on these free talks conducted by AWS subject matter experts.

Webinars featured this month are:

Monday, January 22

Analytics & Big Data
11:00 AM – 11:45 AM PT Analyze your Data Lake, Fast @ Any Scale Lvl 300

Database
01:00 PM – 01:45 PM PT Deep Dive on Amazon Neptune Lvl 200

Tuesday, January 23

Artificial Intelligence
09:00 AM – 09:45 AM PT How to get the most out of Amazon Rekognition Video, a deep learning based video analysis service Lvl 300

Containers
11:00 AM – 11:45 AM PT Introducing AWS Fargate Lvl 200

Serverless
01:00 PM – 02:00 PM PT Overview of Serverless Application Deployment Patterns Lvl 400

Wednesday, January 24

DevOps
09:00 AM – 09:45 AM PT Introducing AWS Cloud9 Lvl 200

Analytics & Big Data
11:00 AM – 11:45 AM PT Deep Dive: Amazon Kinesis Video Streams Lvl 300

Database
01:00 PM – 01:45 PM PT Introducing Amazon Aurora with PostgreSQL Compatibility Lvl 200

Thursday, January 25

Artificial Intelligence
09:00 AM – 09:45 AM PT Introducing Amazon SageMaker Lvl 200

Mobile
11:00 AM – 11:45 AM PT Ionic and React Hybrid Web/Native Mobile Applications with Mobile Hub Lvl 200

IoT
01:00 PM – 01:45 PM PT Connected Product Development: Secure Cloud & Local Connectivity for Microcontroller-based Devices Lvl 200

Monday, January 29

Enterprise
11:00 AM – 11:45 AM PT Enterprise Solutions Best Practices 100 Achieving Business Value with AWS Lvl 100

Compute
01:00 PM – 01:45 PM PT Introduction to Amazon Lightsail Lvl 200

Tuesday, January 30

Security, Identity & Compliance
09:00 AM – 09:45 AM PT Introducing Managed Rules for AWS WAF Lvl 200

Storage
11:00 AM – 11:45 AM PT Improving Backup & DR – AWS Storage Gateway Lvl 300

Compute
01:00 PM – 01:45 PM PT Introducing the New Simplified Access Model for EC2 Spot Instances Lvl 200

Wednesday, January 31

Networking
09:00 AM – 09:45 AM PT Deep Dive on AWS PrivateLink Lvl 300

Enterprise
11:00 AM – 11:45 AM PT Preparing Your Team for a Cloud Transformation Lvl 200

Compute
01:00 PM – 01:45 PM PT The Nitro Project: Next-Generation EC2 Infrastructure Lvl 300

Thursday, February 1

Security, Identity & Compliance
09:00 AM – 09:45 AM PT Deep Dive on AWS Single Sign-On Lvl 300

Storage
11:00 AM – 11:45 AM PT How to Build a Data Lake in Amazon S3 & Amazon Glacier Lvl 300


AWS Direct Connect Update – Ten New Locations Added in Late 2017

AWS Blog - Tue, 01/02/2018 - 10:34

Happy 2018! I am looking forward to getting back to my usual routine, working with our teams to learn about their upcoming launches and then writing blog posts to bring the news to you. Right now I am still catching up on a few launches and announcements from late 2017.

First on the list for today is our most recent round of new cities for AWS Direct Connect. AWS customers all over the world use Direct Connect to create dedicated network connections from their premises to AWS in order to reduce their network costs, increase throughput, and to pursue a more consistent network experience.

We added ten new locations to our Direct Connect roster in December, all of which offer both 1 Gbps and 10 Gbps connectivity, along with partner-supplied options for speeds below 1 Gbps. Here are the newest locations, along with the data centers and associated AWS Regions:

  • Bangalore, India – NetMagic DC2 – Asia Pacific (Mumbai).
  • Cape Town, South Africa – Teraco Ct1 – EU (Ireland).
  • Johannesburg, South Africa – Teraco JB1 – EU (Ireland).
  • London, UK – Telehouse North Two – EU (London).
  • Miami, Florida, US – Equinix MI1 – US East (Northern Virginia).
  • Minneapolis, Minnesota, US – Cologix MIN3 – US East (Ohio).
  • Ningxia, China – Shapotou IDC – China (Ningxia).
  • Ningxia, China – Industrial Park IDC – China (Ningxia).
  • Rio de Janeiro, Brazil – Equinix RJ2 – South America (São Paulo).
  • Tokyo, Japan – AT Tokyo Chuo – Asia Pacific (Tokyo).

You can use these new locations in conjunction with the AWS Direct Connect Gateway to set up connectivity that spans Virtual Private Clouds (VPCs) spread across multiple AWS Regions (this does not apply to the AWS Regions in China).

If you are interested in putting Direct Connect to use, be sure to check out our ever-growing list of Direct Connect Partners.

Jeff;


AWS Training & Certification Update – Free Digital Training + Certified Cloud Practitioner Exam

AWS Blog - Thu, 12/21/2017 - 10:44

We recently made some updates to AWS Training and Certification to make it easier for you to build your cloud skills and to learn about many of the new services that we launched at AWS re:Invent.

Free AWS Digital Training
You can now find over 100 new digital training classes at aws.training, all with unlimited access at no charge.

The courses were built by AWS experts and allow you to learn AWS at your own pace, helping you to build foundational knowledge for dozens of AWS services and solutions. You can also access some more advanced training on Machine Learning and Storage.

Here are some of the new digital training topics:

You can browse through the available topics, enroll in one that interests you, watch it, and track your progress by looking at your transcript:

AWS Certified Cloud Practitioner
Our newest certification exam, AWS Certified Cloud Practitioner, lets you validate your overall understanding of the AWS Cloud with an industry-recognized credential. It covers four domains: cloud concepts, security, technology, and billing and pricing. We recommend that you have at least six months of experience (or equivalent training) with the AWS Cloud in any role, including technical, managerial, sales, purchasing, or financial.

To help you prepare for this exam, take our new AWS Cloud Practitioner Essentials course, one of the new AWS digital training courses. This course will give you an overview of cloud concepts, AWS services, security, architecture, pricing, and support. In addition to helping you validate your overall understanding of the AWS Cloud, AWS Certified Cloud Practitioner also serves as a new prerequisite option for the Big Data Specialty and Advanced Networking Specialty certification exams.

Go For It!
I’d like to encourage you to check out aws.training and to enroll in our free digital training in order to learn more about AWS and our newest services. You can strengthen your skills, add to your knowledge base, and set a goal of earning your AWS Certified Cloud Practitioner certification in the new year.

Jeff;


Amazon Linux 2 – Modern, Stable, and Enterprise-Friendly

AWS Blog - Tue, 12/19/2017 - 14:45

I’m getting ready to wrap up my work for the year, cleaning up my inbox and catching up on a few recent AWS launches that happened at and shortly after AWS re:Invent.

Last week we launched Amazon Linux 2. This is a modern version of Linux, designed to meet the security, stability, and productivity needs of enterprise environments while giving you timely access to new tools and features. It also includes all of the things that made the Amazon Linux AMI popular, including AWS integration, cloud-init, a secure default configuration, regular security updates, and AWS Support. From that base, we have added many new features including:

Long-Term Support – You can use Amazon Linux 2 in situations where you want to stick with a single major version of Linux for an extended period of time, perhaps to avoid re-qualifying your applications too frequently. This build (2017.12) is a candidate for LTS status; the final determination will be made based on feedback in the Amazon Linux Discussion Forum. Long-term support for the Amazon Linux 2 LTS build will include security updates, bug fixes, user-space Application Binary Interface (ABI), and user-space Application Programming Interface (API) compatibility for 5 years.

Extras Library – You can now get fast access to fresh, new functionality while keeping your base OS image stable and lightweight. The Amazon Linux Extras Library eliminates the age-old tradeoff between OS stability and access to fresh software. It contains open source databases, languages, and more, each packaged together with any needed dependencies.

Tuned Kernel – You have access to the latest 4.9 LTS kernel, with support for the latest EC2 features and tuned to run efficiently in AWS and other virtualized environments.

Systemd – Amazon Linux 2 includes the systemd init system, designed to provide better boot performance and increased control over individual services and groups of interdependent services. For example, you can indicate that Service B must be started only after Service A is fully started, or that Service C should start on a change in network connection status.

Wide Availability – Amazon Linux 2 is available in all AWS Regions in AMI and Docker image form. Virtual machine images for Hyper-V, KVM, VirtualBox, and VMware are also available. You can build and test your applications on your laptop or in your own data center and then deploy them to AWS.

Launching an Instance
You can launch an instance in all of the usual ways – AWS Management Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, RunInstances, and via an AWS CloudFormation template. I’ll use the Console:

I’m interested in the Extras Library; here’s how I see which topics (lists of packages) are available:

As you can see, the library includes languages, editors, and web tools that receive frequent updates. Each topic contains all of the dependencies that are needed to install the package on Amazon Linux 2. For example, the Rust topic includes the cmake build system for Rust, cargo for Rust package maintenance, and the LLVM-based compiler toolchain for Rust.

Here’s how I install a topic (Emacs 25.3):

SNS Updates
Many AWS customers use the Amazon Linux AMIs as a starting point for their own AMIs. If you do this and would like to kick off your build process whenever a new AMI is released, you can subscribe to an SNS topic:

You can be notified by email, invoke an AWS Lambda function, and so forth.
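Here's what the Lambda variant might look like with the AWS SDK for Python (boto3); the topic and function ARNs are placeholders (the real topic ARN is listed in the Amazon Linux documentation):

import boto3

sns = boto3.client("sns", region_name="us-east-1")

# Subscribe a Lambda function to the AMI-update topic.
# (The function also needs a resource-based permission that lets SNS invoke it.)
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:123456789012:amazon-linux-2-ami-updates",  # placeholder
    Protocol="lambda",
    Endpoint="arn:aws:lambda:us-east-1:123456789012:function:rebuild-my-ami",  # placeholder
)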

Available Now
Amazon Linux 2 is available now and you can start using it in the cloud and on-premises today! To learn more, read the Amazon Linux 2 LTS Candidate (2017.12) Release Notes.

Jeff;

 


Now Open – AWS EU (Paris) Region

AWS Blog - Mon, 12/18/2017 - 20:45

Today we are launching our 18th AWS Region, our fourth in Europe. Located in the Paris area, the new Region allows AWS customers to better serve end users in and around France.

The Details
The new EU (Paris) Region provides a broad suite of AWS services including Amazon API Gateway, Amazon Aurora, Amazon CloudFront, Amazon CloudWatch, CloudWatch Events, Amazon CloudWatch Logs, Amazon DynamoDB, Amazon Elastic Compute Cloud (EC2), EC2 Container Registry, Amazon ECS, Amazon Elastic Block Store (EBS), Amazon EMR, Amazon ElastiCache, Amazon Elasticsearch Service, Amazon Glacier, Amazon Kinesis Streams, Polly, Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Route 53, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), Amazon Virtual Private Cloud, Auto Scaling, AWS Certificate Manager (ACM), AWS CloudFormation, AWS CloudTrail, AWS CodeDeploy, AWS Config, AWS Database Migration Service, AWS Direct Connect, AWS Elastic Beanstalk, AWS Identity and Access Management (IAM), AWS Key Management Service (KMS), AWS Lambda, AWS Marketplace, AWS OpsWorks Stacks, AWS Personal Health Dashboard, AWS Server Migration Service, AWS Service Catalog, AWS Shield Standard, AWS Snowball, AWS Snowball Edge, AWS Snowmobile, AWS Storage Gateway, AWS Support (including AWS Trusted Advisor), Elastic Load Balancing, and VM Import.

The Paris Region supports all sizes of C5, M5, R4, T2, D2, I3, and X1 instances.

There are also four edge locations for Amazon Route 53 and Amazon CloudFront: three in Paris and one in Marseille, all with AWS WAF and AWS Shield. Check out the AWS Global Infrastructure page to learn more about current and future AWS Regions.

The Paris Region will benefit from three AWS Direct Connect locations. Telehouse Voltaire is available today. AWS Direct Connect will also become available at Equinix Paris in early 2018, followed by Interxion Paris.

All AWS infrastructure regions around the world are designed, built, and regularly audited to meet the most rigorous compliance standards and to provide high levels of security for all AWS customers. These include ISO 27001, ISO 27017, ISO 27018, SOC 1 (formerly SAS 70), SOC 2 and SOC 3 Security & Availability, PCI DSS Level 1, and many more. This means customers benefit from all of the best practices of AWS policies, architecture, and operational processes, built to satisfy the needs of even the most security-sensitive customers.

AWS is certified under the EU-US Privacy Shield, and the AWS Data Processing Addendum (DPA) is GDPR-ready and available now to all AWS customers to help them prepare for May 25, 2018 when the GDPR becomes enforceable. The current AWS DPA, as well as the AWS GDPR DPA, allows customers to transfer personal data to countries outside the European Economic Area (EEA) in compliance with European Union (EU) data protection laws. AWS also adheres to the Cloud Infrastructure Service Providers in Europe (CISPE) Code of Conduct. The CISPE Code of Conduct helps customers ensure that AWS is using appropriate data protection standards to protect their data, consistent with the GDPR. In addition, AWS offers a wide range of services and features to help customers meet the requirements of the GDPR, including services for access controls, monitoring, logging, and encryption.

From Our Customers
Many AWS customers are preparing to use this new Region. Here’s a small sample:

Societe Generale, one of the largest banks in France and the world, has accelerated their digital transformation while working with AWS. They developed SG Research, an application that makes reports from Societe Generale’s analysts available to corporate customers in order to improve the decision-making process for investments. The new AWS Region will reduce latency between applications running in the cloud and in their French data centers.

SNCF is the national railway company of France. Their mobile app, powered by AWS, delivers real-time traffic information to 14 million riders. Extreme weather, traffic events, holidays, and engineering works can cause usage to peak at hundreds of thousands of users per second. They are planning to use machine learning and big data to add predictive features to the app.

Radio France, the French public radio broadcaster, offers seven national networks, and uses AWS to accelerate its innovation and stay competitive.

Les Restos du Coeur is a French charity that provides assistance to the needy, delivering food packages and helping with their social and economic integration back into French society. Les Restos du Coeur is using AWS for its CRM system to track the assistance given to each of their beneficiaries and the impact this is having on their lives.

AlloResto by JustEat (a leader in the French FoodTech industry) is using AWS to scale during traffic peaks and to accelerate its innovation process.

AWS Consulting and Technology Partners
We are already working with a wide variety of consulting, technology, managed service, and Direct Connect partners in France. Here’s a partial list:

AWS Premier Consulting Partners – Accenture, Capgemini, Claranet, CloudReach, DXC, and Edifixio.

AWS Consulting Partners – ABC Systemes, Atos International SAS, CoreExpert, Cycloid, Devoteam, LINKBYNET, Oxalide, Ozones, Scaleo Information Systems, and Sopra Steria.

AWS Technology Partners – Axway, Commerce Guys, MicroStrategy, Sage, Software AG, Splunk, Tibco, and Zerolight.

AWS in France
We have been investing in Europe, with a focus on France, for the last 11 years. We have also been developing documentation and training programs to help our customers to improve their skills and to accelerate their journey to the AWS Cloud.

As part of our commitment to AWS customers in France, we plan to train more than 25,000 people in the coming years, helping them develop highly sought-after cloud skills. They will have access to AWS training resources in France via AWS Academy, AWSome days, AWS Educate, and webinars, all delivered in French by AWS Technical Trainers and AWS Certified Trainers.

Use it Today
The EU (Paris) Region is open for business now and you can start using it today!

Jeff;

 

Categories: Cloud

New – Amazon CloudWatch Agent with AWS Systems Manager Integration – Unified Metrics & Log Collection for Linux & Windows

AWS Blog - Thu, 12/14/2017 - 13:45

In the past I've talked about several agents, daemons, and scripts that you could use to collect system metrics and log files from your Windows and Linux instances and on-premises servers, and publish them to Amazon CloudWatch. The data collected by this somewhat disparate collection of tools gave you visibility into the status and behavior of your compute resources, along with the power to take action when a value goes out of range and indicates a potential issue. You can graph any desired metrics on CloudWatch Dashboards, initiate actions via CloudWatch Alarms, and search CloudWatch Logs to find error messages, while taking advantage of our support for custom high-resolution metrics.

New Unified Agent
Today we are taking a nice step forward and launching a new, unified CloudWatch Agent. It runs in the cloud and on-premises, on Linux and Windows instances and servers, and handles metrics and log files. You can deploy it using AWS Systems Manager (SSM) Run Command, SSM State Manager, or from the CLI. Here are some of the most important features:

Single Agent – A single agent now collects both metrics and logs. This simplifies the setup process and reduces complexity.

Cross-Platform / Cross-Environment – The new agent runs in the cloud and on-premises, on 64-bit Linux and 64-bit Windows, and includes HTTP proxy server support.

Configurable – The new agent captures the most useful system metrics automatically. It can be configured to collect hundreds of others, including fine-grained metrics on sub-resources such as CPU threads, mounted filesystems, and network interfaces.

CloudWatch-Friendly – The new agent supports standard 1-minute metrics and the newer 1-second high-resolution metrics. It automatically includes EC2 dimensions such as Instance Id, Image Id, and Auto Scaling Group Name, and also supports the use of custom dimensions. All of the dimensions can be used for custom aggregation across Auto Scaling Groups, applications, and so forth.

Migration – You can easily migrate existing AWS SSM and EC2Config configurations for use with the new agent.

Installing the Agent
The CloudWatch Agent uses an IAM role when running on an EC2 instance, and an IAM user when running on an on-premises server. The role or the user must include the AmazonSSMFullAccess and AmazonEC2ReadOnlyAccess policies. Here’s my role:
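If you're scripting the setup, attaching the two managed policies to an existing role might look like this (the role name is hypothetical):

    # Attach the managed policies required by the agent and SSM.
    aws iam attach-role-policy \
        --role-name CWAgentRole \
        --policy-arn arn:aws:iam::aws:policy/AmazonSSMFullAccess
    aws iam attach-role-policy \
        --role-name CWAgentRole \
        --policy-arn arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess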

I can easily add it to a running instance (this is a relatively new and very handy EC2 feature):
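The CLI equivalent is associate-iam-instance-profile; the instance ID and profile name below are placeholders:

    # Attach an IAM instance profile to a running instance.
    aws ec2 associate-iam-instance-profile \
        --instance-id i-0123456789abcdef0 \
        --iam-instance-profile Name=CWAgentRole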

The SSM Agent is already running on my instance. If it wasn’t, I would follow the steps in Installing and Configuring SSM Agent to set it up.

Next, I install the CloudWatch Agent using AWS Systems Manager:
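From the CLI, the same step is a Run Command invocation of the AWS-ConfigureAWSPackage document; here's a sketch with a placeholder instance ID:

    # Install the CloudWatch Agent package via SSM Run Command.
    aws ssm send-command \
        --document-name "AWS-ConfigureAWSPackage" \
        --parameters '{"action":["Install"],"name":["AmazonCloudWatchAgent"]}' \
        --instance-ids i-0123456789abcdef0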

This takes just a few seconds. Now I can use a simple wizard to set up the configuration file for the agent:
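On a Linux instance, the wizard is installed alongside the agent binaries:

    # Run the interactive configuration wizard.
    sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard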

The wizard also lets me set up the log files to be monitored:

The wizard generates a JSON-format config file and stores it on the instance. It also offers me the option to upload the file to my Parameter Store so that I can deploy it to my other instances (I can also do fine-grained customization of the metrics and log collection configuration by editing the file):

Now I can start the CloudWatch Agent using Run Command, supplying the name of my configuration in the Parameter Store:
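Here's a sketch of that invocation; the configuration parameter name and instance ID are placeholders:

    # Configure and start the agent, pulling its JSON config
    # from the named Parameter Store entry.
    aws ssm send-command \
        --document-name "AmazonCloudWatch-ManageAgent" \
        --parameters '{"action":["configure"],"mode":["ec2"],"optionalConfigurationSource":["ssm"],"optionalConfigurationLocation":["AmazonCloudWatch-linux"],"optionalRestart":["yes"]}' \
        --instance-ids i-0123456789abcdef0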

This runs in a few seconds and the agent begins to publish metrics right away. As I mentioned earlier, the agent can publish fine-grained metrics on the resources inside of or attached to an instance. For example, here are the metrics for each filesystem:

There’s a separate log stream for each monitored log file on each instance:

I can view and search it, just like I can do for any other log stream:
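The same kind of search works from the CLI; the log group name and filter pattern here are placeholders:

    # Search a log group for entries containing "ERROR".
    aws logs filter-log-events \
        --log-group-name my-log-group \
        --filter-pattern "ERROR"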

Now Available
The new CloudWatch Agent is available now and you can start using it today in all public AWS Regions, with AWS GovCloud (US) and the Regions in China to follow.

There’s no charge for the agent; you pay the usual CloudWatch prices for logs and custom metrics.

Jeff;

Categories: Cloud

Amazon EC2 Price Reduction in the Asia Pacific (Mumbai) Region

AWS Blog - Wed, 12/13/2017 - 18:30

Whew – I am just getting back into blogging after a quick recovery from AWS re:Invent!

I’m happy to start things off with yet another AWS price reduction, this one for four instance families in the Asia Pacific (Mumbai) Region. Effective December 1, 2017 we are reducing prices for On-Demand and Reserved Instances as follows:

  • M4 – Up to 15%.
  • T2 – Up to 15%.
  • R4 – Up to 15%.
  • C4 – Up to 10%.

The pricing pages have been updated. Enjoy!

Jeff;

 

Categories: Cloud
