
Feed aggregator

New – AWS Toolkits for PyCharm, IntelliJ (Preview), and Visual Studio Code (Preview)

AWS Blog - Thu, 11/29/2018 - 09:46

Software developers have their own preferred tools. Some use powerful editors, others Integrated Development Environments (IDEs) that are tailored for specific languages and platforms. In 2014 I created my first AWS Lambda function using the editor in the Lambda console. Now, you can choose from a rich set of tools to build and deploy serverless applications. For example, the editor in the Lambda console was greatly enhanced last year when AWS Cloud9 was released. For .NET applications, you can use the AWS Toolkit for Visual Studio and AWS Tools for Visual Studio Team Services.

AWS Toolkits for PyCharm, IntelliJ, and Visual Studio Code

Today, we are announcing the general availability of the AWS Toolkit for PyCharm. We are also announcing the developer preview of the AWS Toolkits for IntelliJ and Visual Studio Code, which are under active development on GitHub. These open source toolkits will enable you to easily develop serverless applications, including a full create, step-through debug, and deploy experience in the IDE and language of your choice, be it Python, Java, Node.js, or .NET.

For example, using the AWS Toolkit for PyCharm you can create a serverless application from a SAM template, run and debug it locally step by step, and deploy it to AWS through AWS CloudFormation, all without leaving the IDE; the walkthrough below covers each of these steps.

These toolkits are distributed under the open source Apache License, Version 2.0.

Installation

Some features use the AWS Serverless Application Model (SAM) CLI. You can find installation instructions for your system here.
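
If you already work with Python tooling, one convenient route at the time of writing is pip; this is only a sketch, so check the official instructions for your platform (note that Docker is also required for local runs):

$ pip install --user aws-sam-cli
$ sam --version   # verify the installation; prints the installed SAM CLI version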

The AWS Toolkit for PyCharm is available via the IDEA Plugin Repository. To install it, in the Settings/Preferences dialog, click Plugins, search for “AWS Toolkit”, use the checkbox to enable it, and click the Install button. You will need to restart your IDE for the changes to take effect.

The AWS Toolkits for IntelliJ and Visual Studio Code are currently in developer preview and under active development. You are welcome to build and install them from their GitHub repositories.

Building a Serverless application with PyCharm

After installing the AWS SAM CLI and the AWS Toolkit, I create a new project in PyCharm and choose SAM on the left to create a serverless application using the AWS Serverless Application Model. I name my project hello-world in the Location field. Expanding More Settings, I choose which SAM template to use as the starting point for my project. For this walkthrough, I select the “AWS SAM Hello World” template.
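
Under the hood, this corresponds roughly to scaffolding a project with the SAM CLI. Here is a sketch of the command-line equivalent (the --name flag is my assumption for the SAM CLI version at hand, so check sam init --help):

$ sam init --runtime python3.6 --name hello-world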

In PyCharm you can use credentials and profiles from your AWS Command Line Interface (CLI) configuration. You can change AWS region quickly if you have multiple environments.
The AWS Explorer shows Lambda functions and AWS CloudFormation stacks in the selected AWS region. Starting from a CloudFormation stack, you can see which Lambda functions are part of it.

The function handler is in the app.py file. After I open the file, I click on the Lambda icon on the left of the function declaration to have the option to run the function locally or start a local step-by-step debugging session.

First, I run the function locally. I can configure the payload of the event that is provided as input for the local invocation, starting from the event templates provided for most services, such as Amazon API Gateway, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), and so on. You can use a file for the payload, or select the share checkbox to make it available to other team members. The function is executed locally, but here you can choose the credentials and the region to be used if the function calls other AWS services, such as Amazon Simple Storage Service (S3) or Amazon DynamoDB.

A local container is used to emulate the Lambda execution environment. This function implements a basic web API, and I can check that the result is in the format expected by Amazon API Gateway.
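
Behind the scenes, this local run corresponds roughly to a SAM CLI invocation. A sketch, assuming the function's logical ID is HelloWorldFunction (the ID used by the SAM Hello World template) and that event.json holds the test event:

$ sam local invoke HelloWorldFunction --event event.json
# Expected output is an API Gateway proxy-style response, e.g.:
# {"statusCode": 200, "body": "{\"message\": \"hello world\"}"}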

After that, I want to get more information on what my code is doing. I set a breakpoint and start a local debugging session. I use the same input event as before. Again, you can choose the credentials and region for the AWS services used by the function.

I step over the HTTP request in the code to inspect the response in the Variables tab. Here you have access to all local variables, including the event and the context passed as input to the function.

After that, I resume the program to reach the end of the debugging session.

Now I am confident enough to deploy the serverless application by right-clicking on the project (or the SAM template file). I can create a new CloudFormation stack or update an existing one; for example, you can have one stack for production and one for testing. For now, I create a new stack called hello-world-prod. I select an S3 bucket in the region to store the package used for the deployment. If your template has parameters, you can set the values used by this deployment here.
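
For reference, the IDE's deploy action maps to the SAM CLI's package-and-deploy flow. A sketch, with my-deployment-bucket standing in for the S3 bucket selected in the dialog:

$ sam package --template-file template.yaml \
    --s3-bucket my-deployment-bucket \
    --output-template-file packaged.yaml
$ sam deploy --template-file packaged.yaml \
    --stack-name hello-world-prod \
    --capabilities CAPABILITY_IAM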

After a few minutes, the stack creation is complete and I can run the function in the cloud with a right-click in the AWS Explorer. There is also an option to jump to the source code of the function.
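
The remote invocation can also be reproduced from the command line. A sketch, assuming you first look up the deployed function's physical name (CloudFormation appends a generated suffix to the logical ID):

$ aws lambda invoke --function-name hello-world-prod-HelloWorldFunction-EXAMPLE \
    --payload file://event.json out.json
$ cat out.json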

As expected, the result of the remote invocation is the same as the local execution. My serverless application is in production!

Using these toolkits, developers can test locally to find problems before deployment, change the code of their application or the resources they need in the SAM template, and update an existing stack, quickly iterating until they reach their goal. For example, they can add an S3 bucket to store images or documents, add a DynamoDB table to store their users, or change the permissions used by their functions.

I am really excited by how much faster and easier it is to build your ideas on AWS. Now you can use your preferred environment to accelerate even further. I look forward to seeing what you will do with these new tools!

Categories: Cloud

AWS Cloud Map: Easily create and maintain custom maps of your applications

AWS Blog - Wed, 11/28/2018 - 16:31

Companies are increasingly building their applications as microservices (many separate services that each do a single job). Microservices often allow companies to iterate and deploy more quickly. Many of these microservice-based modern applications are built using various types of cloud resources and deployed on dynamically changing infrastructure. Previously, you had to use configuration files to manage the locations of your application resources. However, dependencies in a microservices-based application can quickly become too complex to easily manage through configuration files. Additionally, many applications are built using containers that scale dynamically, reacting to changes in traffic load. That increases your application's responsiveness, but poses a new class of problem: now your application components need to discover and connect to the upstream services at runtime. This problem of connectivity in dynamically changing infrastructures and microservices is commonly addressed by service discovery.

Introducing AWS Cloud Map

 

AWS Cloud Map keeps track of all your application components, their locations, attributes, and health status. Now your applications can simply query AWS Cloud Map using the AWS SDK, API, or even DNS to discover the locations of their dependencies. That allows your applications to scale dynamically and connect to upstream services directly, increasing their responsiveness.

When you register your web services and cloud resources in AWS Cloud Map, you can describe them using custom attributes, such as deployment stage and version. Your applications then can make discovery calls specifying the required deployment stage and version. AWS Cloud Map will return the locations of resources that match the supplied parameters. It simplifies your deployments and reduces the operational complexity for your applications.

Integrated health checking for IP-based resources, registered with AWS Cloud Map, automatically stops routing traffic to unhealthy endpoints. Additionally, you have APIs to describe the health status of your services, so that you can learn about potential issues with your infrastructure. That increases the resilience of your applications.

AWS Cloud Map in Action
Getting started with AWS Cloud Map is easy. You can use the AWS console or CLI to create a namespace, such as myapp.com. For this example, I’ll use the CLI. Let’s create a namespace:

aws servicediscovery create-public-dns-namespace --name myapp.com

At this point, you’ll need to decide whether you want your applications to discover resources only via the AWS SDK and API calls, or if you need optional discovery via DNS. When you enable DNS discovery for a namespace, you’ll need to provide IP addresses for all the resources that you register. If you plan to register other cloud resources, such as DynamoDB tables by ARN or the URLs of APIs deployed on Amazon API Gateway, you need to select API discovery mode.
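
Each discovery mode has its own namespace creation call. A sketch of the alternatives (the VPC ID is a placeholder):

# API-only discovery; no DNS records are created:
$ aws servicediscovery create-http-namespace --name myapp.com
# DNS discovery for names resolvable only inside a VPC:
$ aws servicediscovery create-private-dns-namespace --name myapp.com --vpc vpc-12345678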

Once your namespace is created, it’s time to create services. A service represents your application components, such as users, auth, or payment, and can comprise many dynamically changing resources. You can specify a friendly name for your service, then select the DNS discovery and health checking options. You can create a service like this:

aws servicediscovery create-service --name frontend --namespace-id %namespace_id%

After you create a service, you can register service instances with custom attributes:

aws servicediscovery register-instance --service-id %service_id% --instance-id %id% \
    --attributes AWS_INSTANCE_IPV4=54.20.10.1,stage=beta,version=1.0,active=yes

aws servicediscovery register-instance --service-id %service_id% --instance-id %id% \
    --attributes AWS_INSTANCE_IPV4=54.20.10.2,stage=beta,version=2.0,active=no

Now, your applications can make API calls to discover the service instances, optionally providing query parameters to filter the results:

aws servicediscovery discover-instances --namespace-name myapp.com --service-name frontend --query-parameters version=1.0,active=yes
-->
{
    "Instances": [
        {
            "InstanceId": "1",
            "NamespaceName": "myapp.com",
            "ServiceName": "frontend",
            "HealthStatus": "HEALTHY",
            "Attributes": {
                "version": "1.0",
                "active": "yes",
                "stage": "beta",
                "AWS_INSTANCE_IPV4": "54.20.10.1"
            }
        }
    ]
}
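
Because this is a public DNS namespace with IP-based instances, the same service can also be discovered with a plain DNS lookup. A sketch using dig; only healthy instances should be returned:

$ dig +short frontend.myapp.com
# prints the IPv4 addresses of healthy registered instances, e.g. 54.20.10.1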

And that’s it! Amazon Elastic Container Service (ECS) and AWS Fargate are tightly integrated with AWS Cloud Map. When you create your service and enable service discovery, all the task instances are automatically registered in AWS Cloud Map on scale up, and deregistered on scale down. ECS also ensures that only healthy task instances are returned on discovery calls by publishing always up-to-date health information to AWS Cloud Map.
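
As a sketch of that ECS integration (the cluster name, task definition, and service ARN below are placeholders), the --service-registries flag is what links an ECS service to a Cloud Map service:

$ aws ecs create-service --cluster my-cluster --service-name frontend \
    --task-definition frontend:1 --desired-count 2 \
    --service-registries registryArn=$CLOUD_MAP_SERVICE_ARN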

For Amazon Elastic Container Service for Kubernetes (EKS), you can automatically publish the external IPs of the services running in EKS in AWS Cloud Map. To do this, we’ve released an update to an open source project, ExternalDNS, to make Kubernetes resources discoverable via AWS Cloud Map. You can find out more details about Kubernetes External DNS here.

 

Now Generally Available
You can start building your applications with AWS Cloud Map today and enjoy its integration with Amazon ECS and EKS, rich and secure API query interface, ubiquitous DNS name resolution, and integrated health checking support. Want to try it out? Head to https://console.aws.amazon.com/cloudmap/home. To test out the integration with ECS, head to https://console.aws.amazon.com/ecs/home and enable Service Discovery to get started.

Categories: Cloud

New – Hibernate Your EC2 Instances

AWS Blog - Wed, 11/28/2018 - 16:02

As you know, you can easily build highly scalable AWS applications that launch fresh EC2 instances on an as-needed basis. While the instances can be up and running in a matter of seconds, booting the operating system and the application can take considerable time. Also, caches and other memory-centric application components can take some time (sometimes tens of minutes) to preload or warm up. Both of these factors impose a delay that can force you to over-provision in case you need incremental capacity very quickly.

Hibernation for EC2 Instances
Today we are giving you the ability to launch EC2 instances, set them up as desired, hibernate them, and then bring them back to life when you need them. The hibernation process stores the in-memory state of the instance, along with its private and elastic IP addresses, allowing it to pick up exactly where it left off.

This feature is available today and you can use it on freshly launched M3, M4, M5, C3, C4, C5, R3, R4, and R5 instances running Amazon Linux 1 (support for Amazon Linux 2 is in the works and will be ready soon). It applies to On-Demand instances and instances running with Reserved Instance coverage.

When an instance is instructed to hibernate, it writes the in-memory state to a file in the root EBS volume and then (in effect) shuts itself down. The AMI used to launch the instance must be encrypted, as must the root EBS volume of the instance. The encryption ensures proper protection for sensitive data when it is copied from memory to the EBS volume.

While the instance is in hibernation, you pay only for the EBS volumes and Elastic IP Addresses attached to it; there are no other hourly charges (just like any other stopped instance).

Hibernation in Action
To check out this feature, I launch a c4.large instance and select hibernation as a stop behavior.
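
The console checkbox corresponds to a launch-time option in the API. A sketch of the CLI equivalent, with placeholder AMI, subnet, and key pair values:

$ aws ec2 run-instances --image-id ami-12345678 --instance-type c4.large \
    --subnet-id subnet-12345678 --key-name my-key \
    --hibernation-options Configured=true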

I also expand my instance’s root volume, adding 10 GB plus the memory size of the instance to the desired size.

I also create and associate an Elastic IP address with my instance since the public IP address will change. My instance is up and running, and I can check the uptime.

Then I select the instance in the EC2 Console and choose Stop – Hibernate from the Instance State menu (API and CLI support is also available, as sketched below).
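
A sketch of the CLI equivalent, with a placeholder instance ID:

$ aws ec2 stop-instances --instance-ids i-1234567890abcdef0 --hibernate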

The instance state transitions from running to stopping, and then to stopped, in seconds.

The console provides additional information about the transition.

The SSH connection to the instance drops, since it is no longer running.

Later, when I am ready to proceed, I click Start.

This time the state goes from stopped to pending, and then to running, again in seconds, and I can reconnect. I can then use uptime to see that the instance has not been rebooted, but has continued from where it left off.

If I was using this instance interactively, I could use a session manager such as screen, tmux, or mosh to make this totally seamless. That said, the most interesting use cases for hibernation revolve around long-running processes and services that take a lot of time to initialize before they are ready to accept traffic.

Things to Know
As you can see, hibernation is really easy to use, and I hope that you are already thinking of some ways to apply it to your application. Here are a couple of things to keep in mind:

Instance Type – You can enable and use hibernation on freshly launched instances of the types that I listed above.

Root Volume Size – The root volume must have free space equal to the amount of RAM on the instance in order for the hibernation to succeed.

Operating Systems – The newest Amazon Linux 1 AMIs are configured for hibernation, with many others in the works. You will need to create an encrypted AMI, using one of these AMIs as a base. You can also follow our directions to customize and use your own AMI.

Modifications – You cannot modify the instance size or type while it is in hibernation, but you can modify the user data and the EBS Optimization setting.

Pricing – While the instance is in hibernation, you pay only for the EBS storage and any Elastic IP addresses attached to the instance.

Performance – The time to hibernate or resume is dependent on the memory size of the instance, the amount of in-memory data to be saved, and the throughput of the root EBS volume.

Coming Soon – We are working on support for Amazon Linux 2, Ubuntu, Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, Windows Server 2016, along with the SQL Server variants of the Windows AMIs.

Available Now
This feature is available now in the US East (N. Virginia, Ohio), US West (N. California, Oregon), Canada (Central), South America (São Paulo), Asia Pacific (Mumbai, Seoul, Singapore, Sydney, Tokyo), and EU (Frankfurt, London, Ireland, Paris) Regions.

Jeff;

Categories: Cloud

New AWS License Manager – Manage Software Licenses and Enforce Licensing Rules

AWS Blog - Wed, 11/28/2018 - 15:57

When you make use of commercial, licensed software in the AWS Cloud using a BYOL (Bring Your Own License) strategy, you need to make sure that you stay within the provisions of the license, while also avoiding expensive over-provisioning. This can be a challenge when it is so easy to launch instances on demand whenever you need them!

New AWS License Manager
Today we are launching AWS License Manager. You can define your licensing rules, taking into account any enterprise agreements and other terms that govern your use of the licensed software. Then you associate them with your deployment mechanism (golden AMIs or Launch Templates) so that EC2 instances launched via the mechanism will be automatically tracked. You can also discover existing usage across one or more AWS accounts, and track all usage through the AWS Management Console.

Let’s take a quick tour, assuming that I own a 100-vCPU license for an enterprise database server.

The first step is to define one or more License Configurations. I open the License Manager Console and click Create license configuration to get started.

I enter a name and description for my configuration, indicate that the license is based on vCPUs (and limited to 100), and that I want to enforce the license.

I can also create rules for the license. The rules control the applicability of the license with respect to this configuration. I can specify a minimum and/or maximum number of vCPUs, and any desired EC2 tenancy (shared, dedicated host, or dedicated instance). Here’s a rule that specifies 4-64 vCPUs, and shared tenancy.

I confirm that the rule is defined as desired, and click Submit to move ahead. My license configuration is ready, as are some others created by colleagues.

After I create my license configuration, I can associate it with an AMI by selecting the configuration and clicking Associate AMI in the Actions menu. I pick one or more AMIs and click Associate.

I can see my overall license usage at a glance (this is a central dashboard that works across multiple accounts and in conjunction with AWS Organizations).

I can click Settings to link to my AWS Organizations accounts, set up a cross-account inventory search, and arrange to receive SNS alerts when the usage limit for a license has been breached.

Going Further
Here are a couple of other things to know about AWS License Manager:

Supported License Types – AWS License Manager supports any license based on vCPUs, physical cores, and physical sockets, and is not tied to any software vendor.

Cross-Account Usage – AWS License Manager works hand-in-glove with AWS Organizations. You can sign in to your Master account, link all of the accounts with a click, and share license configurations across your Organization. You will be able to use the dashboard to see an Organization-wide view of your license usage.

Multi-Account Software Discovery – AWS License Manager also works with AWS Systems Manager, and works across accounts within an Organization. The discovered data is stored in an S3 bucket and an Amazon Athena database (encrypted in both places), and is processed by an AWS Glue job.

Programmatic Access – You can create and manage license configurations from the Console, APIs, or the AWS Command Line Interface (CLI). Interesting functions include CreateLicenseConfiguration, GetLicenseConfiguration, ListResourceInventory, and ListUsageForLicenseConfiguration.
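
As a sketch of that programmatic flow for the 100-vCPU license described above (I'm omitting the rule syntax here, and the ARN is a placeholder):

$ aws license-manager create-license-configuration \
    --name "Enterprise DB Server" \
    --license-counting-type vCPU \
    --license-count 100 \
    --license-count-hard-limit
$ aws license-manager list-usage-for-license-configuration \
    --license-configuration-arn $LICENSE_CONFIGURATION_ARN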

Pricing – You can use AWS License Manager at no charge. Behind the scenes, AWS License Manager stores inventory data in an S3 bucket and an Amazon Athena database, and processes it using an AWS Glue job. You’ll pay the usual AWS prices for these resources and services.

Available Now
AWS License Manager is available now and you can start using it today in the US East (N. Virginia), US West (Oregon), US East (Ohio), Europe (Ireland), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Frankfurt), Asia Pacific (Seoul), Asia Pacific (Mumbai), and Europe (London) Regions.

Categories: Cloud

AWS Launches, Previews, and Pre-Announcements at re:Invent 2018 – Andy Jassy Keynote

AWS Blog - Wed, 11/28/2018 - 11:04

As promised in Welcome to AWS re:Invent 2018, here’s a summary of the launches, previews, and pre-announcements from Andy Jassy’s keynote. I have included links to allow you to sign up for previews, as appropriate.

(photo from AWS Community Hero Eric Hammond)

Launches
Here are the blog posts that we wrote for today’s launches:

S3 Glacier Deep Archive
This new storage class for Amazon Simple Storage Service (S3) is designed for long-term data archival and is the lowest cost storage from any cloud provider. Priced from just $0.00099/GB-mo (less than one-tenth of one cent, or $1.01 per TB-mo), the cost is comparable to tape archival services. Data can be retrieved in 12 hours or less, and there will also be a bulk retrieval option that will allow you to inexpensively retrieve even petabytes of data within 48 hours.

AWS Control Tower
This service helps you automate the setup of a well-architected multi-account AWS environment using a set of blueprints that embody AWS best practices. Guardrails, both mandatory and recommended, are available for high-level, rule-based governance. You will have access to an integrated dashboard so that you can keep a watchful eye over the accounts provisioned, the guardrails that are enabled, and your overall compliance status. Learn more.

Amazon Textract
This Optical Character Recognition (OCR) service will help you to extract text and data from virtually any document. Powered by Machine Learning, it will identify bounding boxes, detect key-value pairs, and make sense of tables, while eliminating manual effort and lowering your document-processing costs. Sign up for the preview.

AWS Outposts
This service will bring AWS to your existing data center, providing a consistent, seamless experience across on-premises and the cloud, and giving you the ability to run on-premises applications with the exact same Application Programming Interfaces (APIs), consoles, features, hardware, and tools that you use on AWS. Sign up for the preview.

Amazon RDS on VMware
This is a fully managed service for on-premises databases. You can set up, run, and scale databases in VMware vSphere using the same tools already enjoyed by hundreds of thousands of Amazon Relational Database Service (RDS) customers. You can build low-cost high-availability hybrid environments, implement disaster recovery to AWS, and do long-term archival in Amazon Simple Storage Service (S3). Sign up for the preview!

Amazon Quantum Ledger Database
This fully managed ledger database will allow you to track and verify the complete history of changes to your application data. It uses an immutable journal that maintains a sequenced, cryptographically verifiable record of all changes that cannot be deleted or modified. It is scalable and easy to use, supports SQL queries, and runs 2-3x faster than common blockchain frameworks. Sign up for the preview.

AWS Managed Blockchain
This is a managed blockchain service that lets you quickly create and manage a scalable blockchain network using the popular open source frameworks Hyperledger Fabric and Ethereum, so that you can transact and securely share data. It is designed to scale to meet the needs of thousands of applications generating millions of transactions, with simple mechanisms to invite new members, manage certificates, and track operational metrics. Sign up for the preview.

Amazon Timestream
This is a fast, scalable, fully managed time-series database that you can use to store and analyze trillions of events per day at 1/10th the cost of a relational database. It is optimized for data that arrives in time order and for queries that include a time interval. It is a great fit for IoT, industrial telemetry, app monitoring, and DevOps data. Timestream automates rollups, retention, tiering, and compression so time-series data can be efficiently stored and processed. Timestream’s query engine adapts to the location and format of data, making it easier and faster to query time-series data. Learn more.

AWS Lake Formation
This fully managed service will help you to build, secure, and manage a data lake. You’ll be able to point it at your data sources, have it crawl the sources, and pull the data into Amazon Simple Storage Service (S3). Lake Formation uses Machine Learning to identify and de-duplicate data, and also performs format changes in order to accelerate analytical processing. You will also be able to define and centrally manage consistent security policies across your data lake and the services that you use to analyze and process the data. Sign up for the preview.

AWS Security Hub
This service will allow you to centrally view and manage security alerts and automate compliance checks within and across AWS accounts. It will aggregate security findings from AWS and partner services and present you with built-in and customizable insights that are unique to your environment. Try the preview!

Stay Tuned
I am looking forward to writing about each of these services when they are ready to launch, so stay tuned!

Jeff;

 

Categories: Cloud

Amazon SageMaker Neo – Train Your Machine Learning Models Once, Run Them Anywhere

AWS Blog - Wed, 11/28/2018 - 10:54

Machine learning (ML) is split into two distinct phases: training and inference. Training deals with building the model, i.e. running a ML algorithm on a dataset in order to identify meaningful patterns. This often requires large amounts of storage and computing power, making the cloud a natural place to run training jobs with services such as Amazon SageMaker and the AWS Deep Learning AMIs.

Inference deals with using the model, i.e. predicting results for data samples that the model has never seen. Here, the requirements are different: developers are typically concerned with optimizing latency (how long does a single prediction take?) and throughput (how many predictions can I run in parallel?). Of course, the hardware architecture of your prediction environment has a very significant impact on such metrics, especially if you’re dealing with resource-constrained devices: as a Raspberry Pi enthusiast, I often wish the little fellow packed a little more punch to speed up my inference code.

Tuning a model for a specific hardware architecture is possible, but the lack of tooling makes this an error-prone and time-consuming process. Minor changes to the ML framework or the model itself usually require the user to start all over again. Unfortunately, this forces most ML developers to deploy the same model everywhere regardless of the underlying hardware, thus missing out on significant performance gains.

Well, no more. Today, I’m very happy to announce Amazon SageMaker Neo, a new capability of Amazon SageMaker that enables machine learning models to train once and run anywhere in the cloud and at the edge with optimal performance.

Introducing Amazon SageMaker Neo

Without any manual intervention, Amazon SageMaker Neo optimizes models deployed on Amazon EC2 instances, Amazon SageMaker endpoints and devices managed by AWS Greengrass.

Here are the supported configurations:

  • Frameworks and algorithms: TensorFlow, Apache MXNet, PyTorch, ONNX, and XGBoost.
  • Hardware architectures: ARM, Intel, and NVIDIA starting today, with support for Cadence, Qualcomm, and Xilinx hardware coming soon. In addition, Amazon SageMaker Neo is released as open source code under the Apache Software License, enabling hardware vendors to customize it for their processors and devices.

The Amazon SageMaker Neo compiler converts models into an efficient common format, which is executed on the device by a compact runtime that uses less than one-hundredth of the resources that a generic framework would traditionally consume. The Amazon SageMaker Neo runtime is optimized for the underlying hardware, using specific instruction sets that help speed up ML inference.

This has three main benefits:

  • Converted models perform at up to twice the speed, with no loss of accuracy.
  • Sophisticated models can now run on virtually any resource-limited device, unlocking innovative use cases like autonomous vehicles, automated video security, and anomaly detection in manufacturing.
  • Developers can run models on the target hardware without dependencies on the framework.

Under the hood

Most machine learning frameworks represent a model as a computational graph: a vertex represents an operation on data arrays (tensors) and an edge represents data dependencies between operations. The Amazon SageMaker Neo compiler exploits patterns in the computational graph to apply high-level optimizations including operator fusion, which fuses multiple small operations together; constant-folding, which statically pre-computes portions of the graph to save execution costs; a static memory planning pass, which pre-allocates memory to hold each intermediate tensor; and data layout transformations, which transform internal data layouts into hardware-friendly forms. The compiler then produces efficient code for each operator.

Once a model has been compiled, it can be run by the Amazon SageMaker Neo runtime. This runtime takes about 1MB of disk space, compared to the 500MB-1GB required by popular deep learning libraries. An application invokes a model by first loading the runtime, which then loads the model definition, model parameters, and precompiled operations.

I can’t wait to try this on my Raspberry Pi. Let’s get to work.

Downloading a pre-trained model

Plenty of pre-trained models are available in the Apache MXNet, Gluon CV or TensorFlow model zoos: here, I’m using a 50-layer model based on the ResNet architecture, pre-trained with Apache MXNet on the ImageNet dataset.

First, I’m downloading the 227MB model as well as the JSON file defining its different layers. This file is particularly important: it tells me that the input symbol is called ‘data’ and that its shape is [1, 3, 224, 224], i.e. 1 image, 3 channels (red, green and blue), 224×224 pixels. I’ll need to make sure that images passed to the model have this exact shape. The output shape is [1, 1000], i.e. a vector containing the probability for each one of the 1,000 classes present in the ImageNet dataset.

To define a performance baseline, I use this model and a vanilla unoptimized version of Apache MXNet 1.2 to predict a few images: on average, inference takes about 6.5 seconds and requires about 306 MB of RAM.

That’s pretty slow: let’s compile the model and see how fast it gets.

Compiling the model for the Raspberry Pi

First, let’s store both model files in a compressed TAR archive and upload it to an Amazon S3 bucket.

$ tar cvfz model.tar.gz resnet50_v1-symbol.json resnet50_v1-0000.params
a resnet50_v1-symbol.json
a resnet50_v1-0000.params
$ aws s3 cp model.tar.gz s3://jsimon-neo/
upload: ./model.tar.gz to s3://jsimon-neo/model.tar.gz

Then, I just have to write a simple configuration file for my compilation job. If you’re curious about other frameworks and hardware targets, ‘aws sagemaker create-compilation-job help‘ will give you the exact syntax to use.

{ "CompilationJobName": "resnet50-mxnet-raspberrypi", "RoleArn": $SAGEMAKER_ROLE_ARN, "InputConfig": { "S3Uri": "s3://jsimon-neo/model.tar.gz", "DataInputConfig": "{\"data\": [1, 3, 224, 224]}", "Framework": "MXNET" }, "OutputConfig": { "S3OutputLocation": "s3://jsimon-neo/", "TargetDevice": "rasp3b" }, "StoppingCondition": { "MaxRuntimeInSeconds": 300 } }

Launching the compilation process takes a single command.

$ aws sagemaker create-compilation-job --cli-input-json file://job.json

Compilation is complete in seconds. Let’s figure out the name of the compilation artifact, fetch it from Amazon S3, and extract it locally:

$ aws sagemaker describe-compilation-job \
    --compilation-job-name resnet50-mxnet-raspberrypi \
    --query "ModelArtifacts"
{
    "S3ModelArtifacts": "s3://jsimon-neo/model-rasp3b.tar.gz"
}
$ aws s3 cp s3://jsimon-neo/model-rasp3b.tar.gz .
$ tar xvfz model-rasp3b.tar.gz
x compiled.params
x compiled_model.json
x compiled.so

As you can see, the artifact contains:

  • The original model and symbol files.
  • A shared object file storing compiled, hardware-optimized operators used by the model.

For convenience, let’s rename them to ‘model.params’, ‘model.json’ and ‘model.so’, and then copy them to the Raspberry Pi in a ‘resnet50’ directory.

$ mkdir resnet50
$ mv compiled.params resnet50/model.params
$ mv compiled_model.json resnet50/model.json
$ mv compiled.so resnet50/model.so
$ scp -r resnet50 pi@raspberrypi.local:~

Setting up the inference environment on the Raspberry Pi

Before I can predict images with the model, I need to install the appropriate runtime on my Raspberry Pi. Pre-built packages are available: I just have to download the one for ‘armv7l’ architectures and to install it on my Pi with the provided script. Please note that I don’t need to install any additional deep learning framework (Apache MXNet in this case), saving up to 1GB of persistent storage.

$ scp -r dlr-1.0-py2.py3-armv7l pi@raspberrypi.local:~
<ssh to the Pi>
$ cd dlr-1.0-py2.py3-armv7l
$ sh ./install-py3.sh

We’re all set. Time to predict images!

Using the Amazon SageMaker Neo runtime

On the Pi, the runtime is available as a Python package named ‘dlr’ (deep learning runtime). Using it to predict images is what you would expect:

  • Load the model, defining its input and output symbols.
  • Load an image.
  • Predict!

Here’s the corresponding Python code.

import os
import numpy as np
from dlr import DLRModel

# Load the compiled model
model_path = 'resnet50'                  # Directory holding model.json, model.params, model.so
input_shape = {'data': [1, 3, 224, 224]} # A single RGB 224x224 image
output_shape = [1, 1000]                 # The probability for each one of the 1,000 classes
device = 'cpu'                           # Go, Raspberry Pi, go!
model = DLRModel(model_path, input_shape, output_shape, device)

# Load names for ImageNet classes
synset_path = os.path.join(model_path, 'synset.txt')
with open(synset_path, 'r') as f:
    synset = eval(f.read())

# Load an image stored as a numpy array
image = np.load('dog.npy').astype(np.float32)
print(image.shape)
input_data = {'data': image}

# Predict
out = model.run(input_data)
top1 = np.argmax(out[0])
prob = np.max(out)
print("Class: %s, probability: %f" % (synset[top1], prob))

Let’s give it a try on this image. Aren’t chihuahuas and Raspberry Pis made for one another?

(1, 3, 224, 224) Class: Chihuahua, probability: 0.901803

The prediction is correct, but what about speed and memory consumption? Well, this prediction takes about 0.85 second and requires about 260MB of RAM: with Amazon SageMaker Neo, it’s now more than 7 times faster (down from about 6.5 seconds) and 15% more RAM-efficient than with a vanilla model.

This impressive performance gain didn’t require any complex and time-consuming work: all we had to do was to compile the model. Of course, your mileage will vary depending on models and hardware architectures, but you should see significant improvements across the board, including on Amazon EC2 instances such as the C5 or P3 families.

Now available

I hope this post was informative. Compiling models with Amazon SageMaker Neo is free of charge; you will only pay for the underlying resources using the model (Amazon EC2 instances, Amazon SageMaker instances, and devices managed by AWS Greengrass).

The service is generally available today in US-East (N. Virginia), US-West (Oregon) and Europe (Ireland). Please start exploring and let us know what you think. We can’t wait to see what you will build!

Julien;

Categories: Cloud

Amazon Forecast – Time Series Forecasting Made Easy

AWS Blog - Wed, 11/28/2018 - 10:31

The capacity to foresee the future would be an incredible superpower. At AWS, we can’t give you that, but we can help you use machine learning to forecast time series in a few steps.

The goal of time series forecasting is to predict future values of time-dependent data such as weekly sales, daily inventory levels, or hourly website traffic. Companies today use everything from simple spreadsheets to complex financial planning software to attempt to accurately forecast future business outcomes such as product demand, resource needs, or financial performance.

These tools build forecasts by looking at a historical series of data, which is called time series data. For example, such tools may try to predict the future sales of a raincoat by looking only at its previous sales data with the underlying assumption that the future is determined by the past.

This approach can struggle to produce accurate forecasts for large sets of data that have irregular trends. Also, it fails to easily combine data series that change over time (such as price, discounts, web traffic) with relevant independent variables like product features and store locations.

Introducing Amazon Forecast

Amazon has been solving time-series forecasting challenges across multiple areas including retail, supply chain, and server capacity for over two decades. Using machine learning techniques we have learned from this experience, today we are introducing Amazon Forecast, a fully managed deep learning service for time-series forecasting. Amazon Forecast packages our years of experience in building and operating scalable, highly accurate forecasting technology into an easy-to-use and fully-managed service.

You can use Amazon Forecast to generate predictions on time-series data to estimate:

  • Operational metrics, such as web traffic to servers, AWS usage, or IoT sensor metrics.
  • Business metrics, such as sales, profits, and expenses.
  • Resource requirements, such as the quantity of energy or bandwidth needed to meet a specific demand.
  • The amount of raw goods, services, or other inputs needed by a manufacturing process.
  • Retail demand considering the impact of price discounts, marketing promotions, and other campaigns.

Amazon Forecast is designed with these three main benefits in mind:

  • Accuracy, using deep neural nets and traditional statistical methods for forecasting. Amazon Forecast can learn from your data automatically and pick the best algorithms to train a model designed for your data. When you have many related time series, forecasts made using the Amazon Forecast deep learning algorithms, such as DeepAR and MQ-RNN, tend to be more accurate than forecasts made with traditional methods, such as exponential smoothing.
  • End-to-end management, automating the entire forecasting workflow from data upload to data processing, model training, dataset updates, and forecasting. Enterprise systems can directly consume your forecasts as an API.
  • Usability, in the console you can look up and visualize forecasts for any time series at different granularities. You can also see metrics for the accuracy of your predictor’s forecasts. Developers with no machine learning expertise can use the Amazon Forecast APIs, AWS Command Line Interface (CLI), or the console to import training data into one or more Amazon Forecast datasets, train models, and deploy the models to generate forecasts.

Using Amazon Forecast

When creating forecasting projects in Amazon Forecast, you primarily work with the following resources:

  • Dataset, to upload your data. Amazon Forecast algorithms use the datasets to train models.
  • Dataset Group, a container for one or more datasets, to use multiple datasets for model training.
  • Predictor, a result of training models. To create a predictor you provide a dataset group and a recipe (which provides an algorithm) or let Amazon Forecast decide which forecasting model works best. The algorithm trains a model using the data in the datasets.
  • Forecast, using a predictor you can run inference to generate forecasts.

You can use Amazon Forecast with the AWS console, CLI and SDKs. For example, you can use the AWS SDK for Python to train a model or get a forecast in a Jupyter notebook, or the AWS SDK for Java to add forecasting capabilities to an existing business application.
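
As a purely hypothetical sketch of how that resource flow might look from the CLI (the service is in preview, so the command and parameter names here are my assumptions; consult the Amazon Forecast documentation for the real syntax):

$ aws forecast create-dataset-group --dataset-group-name my_dataset_group --domain RETAIL
$ aws forecast create-predictor --predictor-name my_predictor \
    --input-data-config DatasetGroupArn=$DATASET_GROUP_ARN \
    --perform-auto-ml --forecast-horizon 14
$ aws forecastquery query-forecast --forecast-arn $FORECAST_ARN \
    --filters item_id=item_123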

Pricing and Availability

With Amazon Forecast, you pay only for what you use. There are three different types of costs in Amazon Forecast:

  • Generated forecast: A forecast is a prediction of future values for a single variable over any time horizon. Forecasts are billed in units of 1,000 (rounded up to the nearest thousand).
  • Data storage: Costs for each GB of data stored and used to train your models.
  • Training hours: Costs for each hour of training required for a custom model based on data provided by customers.

As part of the AWS Free Tier, for the first two months after first using Amazon Forecast, you have no charge for:

  • Generated forecasts: Up to 10K time series forecasts per month
  • Data storage: Up to 10GB per month
  • Training hours: Up to 10 hours per month

Amazon Forecast is available in preview in the following regions: US East (Northern Virginia), US West (Oregon).

It has never been so easy to do time-series forecasts with high accuracy. I really look forward to seeing what our customers are going to build with this!

Categories: Cloud

Amazon Personalize – Real-Time Personalization and Recommendation for Everyone

AWS Blog - Wed, 11/28/2018 - 10:26

Machine learning definitely offers a wide range of exciting topics to work on, but there’s nothing quite like personalization and recommendation.

At first glance, matching users to items that they may like sounds like a simple problem. However, the task of developing an efficient recommender system is challenging. Years ago, Netflix even ran a movie recommendation competition with a $1 Million award! Indeed, building, optimizing and deploying real-time personalization today requires specialized expertise in analytics, applied machine learning, software engineering, and systems operations. Few organizations have the knowledge, skills, and experience to overcome these challenges, and they either abandon the idea of using recommendation or build under-performing models.

For over 20 years, Amazon.com has built recommender systems at scale, integrating personalized recommendations across the buying experience – from product discovery to checkout.

To help all AWS customers do the same, we are very happy to announce Amazon Personalize, a fully-managed service that puts personalization and recommendation in the hands of developers with little machine learning experience.

Introducing Amazon Personalize

How does Amazon Personalize simplify personalization and recommendation? As explained in a previous blog post, you could already build recommendation models on Amazon SageMaker using algorithms such as Factorization Machines. However, it’s fair to say that this requires extensive data preparation and expert tuning in order to get good results.

Creating a recommendation model with Amazon Personalize is much simpler. Using AutoML, a new process that automates complex machine learning tasks, Personalize performs and accelerates the difficult work required to design, train, and deploy a machine learning model.

Amazon Personalize supports both datasets stored in Amazon S3 and streaming data sets, e.g. events sent in real-time from a JavaScript tracker or server-side. The high-level process looks like this:

  1. Create a schema describing the dataset, using Personalize-reserved keywords for user ids, item ids, etc.
  2. Create a dataset group that contains datasets used for building the model and for predicting: user-item interactions (aka “who liked what”), users and items. The last two are optional, as we will see in the example below.
  3. Send data to Personalize.
  4. Create a solution, i.e. select a recommendation recipe and train it on the dataset group.
  5. Create a campaign to predict new samples.

With data stored in Amazon S3, sending data to Personalize simply means adding your data files to the dataset group. Ingestion is triggered automatically.

Working with streaming data is different. One way to send events would be to use the AWS Amplify JavaScript library, which is integrated with the event tracking service in Personalize. Another way would be to send them server-side via the AWS SDK in your favourite language: ingestion can happen from any source with the code hosted inside of AWS (e.g. in Amazon EC2 or AWS Lambda) or outside.

Time for an example. Let’s build a solution based on the MovieLens dataset!

The MovieLens dataset

MovieLens is a well-known movie ratings dataset. It comes in different sizes and formats: here, we will use ml-20m, which contains 20 million ratings applied to 27,000 movies by 138,000 users.

This dataset contains a file named ‘ratings.csv’ storing user-item interactions. The first lines look like this.

userId,movieId,rating,timestamp
1,2,3.5,1112486027
1,29,3.5,1112484676
1,32,3.5,1112484819
1,47,3.5,1112484727
1,50,3.5,1112484580

It reads like this: user 1 gave movie 2 a 3.5 rating. Same for movies 29, 32, 47, 50 and so on! This is exactly what we need to build a recommendation model. Let’s get to work.

Creating a schema for the dataset

The first step is to create an Avro schema for this dataset. This is pretty straightforward, we just need to use some of the keywords defined in Amazon Personalize.

{"type": "record", "name": "Interactions", "namespace": "com.amazonaws.personalize.schema", "fields":[ {"name": "ITEM_ID", "type": "string"}, {"name": "USER_ID", "type": "string"}, {"name": "TIMESTAMP", "type": "long"} ], "version": "1.0"}

Preparing the dataset

Once we’ve downloaded and unzipped the dataset, let’s load the ‘ratings.csv’ file and apply the following processing:

  • Shuffle reviews.
  • Keep only movies rated 4 and above, and drop the ratings columns: we just want our model to recommend movies that users should really like.
  • Rename columns to the names used in the schema.
  • Keep only 100,000 interactions to minimize training time (this is just a demo after all!).

All of this is easily achieved with the Pandas Python library, the Swiss Army knife for columnar data processing. While we’re at it, we’ll also upload the processed file to an Amazon S3 bucket.

import pandas, boto3
from sklearn.utils import shuffle

ratings = pandas.read_csv('ratings.csv')
ratings = shuffle(ratings)
ratings = ratings[ratings['rating'] > 3.6]
ratings = ratings.drop(columns='rating')
ratings.columns = ['USER_ID', 'ITEM_ID', 'TIMESTAMP']
ratings = ratings[:100000]
ratings.to_csv('ratings.processed.csv', index=False)

s3 = boto3.client('s3')
s3.upload_file('ratings.processed.csv', 'jsimon-ml20m', 'ratings.processed.csv')

Creating the dataset group

First, we need to create a dataset group containing the user-item dataset as well as its schema. Let’s do this with the AWS CLI: as you’ll see, a lot of these CLI operations require Amazon Resource Names (ARNs) output by a previous call, so make sure you keep track of everything when you experiment.

$ aws personalize create-dataset-group --name jsimon-ml20m-dataset-group
$ aws personalize create-schema --name jsimon-ml20m-schema \
    --schema file://jsimon-ml20m-schema.json
$ aws personalize create-dataset --schema-arn $SCHEMA_ARN \
    --dataset-group-arn $DATASET_GROUP_ARN \
    --dataset-type INTERACTIONS

Importing datasets

In this simple example, we’ll import data on-demand. It’s also possible to schedule import jobs in order to load new data regularly. We need to pass a role allowing data to be read from the Amazon S3 bucket.

$ aws personalize create-dataset-import-job --job-name jsimon-ml20m-job \
    --role-arn $ROLE_ARN --dataset-arn $DATASET_ARN \
    --data-source dataLocation=s3://jsimon-ml20m/ratings.processed.csv

This will take a little while and we can use the describe-dataset-import-job API to check for completion. Plenty of information is returned, but let’s just query the import status.

$ aws personalize describe-dataset-import-job \
    --dataset-import-job-arn $DATASET_IMPORT_JOB_ARN \
    --query "datasetImportJob.latestDatasetImportJobRun.status"
"CREATE IN_PROGRESS"

Putting it all together: creating a solution

Once datasets have been imported, we need to select a recipe to cook our recommendation model. A recipe is much more than an algorithm: it also includes predefined feature transformation, initial parameters for the algorithm as well as automatic model tuning. Thus, recipes remove the need to have expertise in personalization.

Amazon Personalize comes with several recipes suitable for different use cases, and advanced users can also add their own recipes.

Here’s the list of available recipes.

arn:aws:personalize:::recipe/awspersonalizehrnnmodel
arn:aws:personalize:::recipe/awspersonalizehrnnmodel-for-coldstart
arn:aws:personalize:::recipe/awspersonalizehrnnmodel-for-metadata
arn:aws:personalize:::recipe/awspersonalizeffnnmodel
arn:aws:personalize:::recipe/awspersonalizedeepfmmodel
arn:aws:personalize:::recipe/awspersonalizesimsmodel
arn:aws:personalize:::recipe/search-personalization
arn:aws:personalize:::recipe/popularity-baseline

Recommendation experts will certainly enjoy the flexibility that they bring, but what about developers who are new to the topic?

As mentioned earlier, Amazon Personalize supports AutoML, a new technique that automatically searches for the optimal recipe, so let’s enable it. Hyperparameter optimization is enabled by default. Last but not least, Amazon Personalize solutions can scale automatically according to incoming traffic: we simply need to define the minimum number of transactions per second (TPS) that we want to support.

Thus, we can create the solution like so:

$ aws personalize create-solution --name jsimon-ml20m-solution \
--minTPS 10 --perform-auto-ml \
--dataset-group-arn $DATASET_GROUP_ARN \
--query 'solution.status'
"CREATE IN_PROGRESS"

This will take a little while as the optimal recipe is selected, trained and tuned. Once all of this is complete, we can look at solution metrics.

$ aws personalize get-metrics --solution-arn $SOLUTION_ARN

Recommending new items in real-time

If we’re happy with the model, we can now create a campaign in order to deploy it. It will be updated automatically every time the solution is deployed.

$ aws personalize create-campaign --name jsimon-ml20m-solution \
    --solution-arn $SOLUTION_ARN --update-mode AUTO

Now, let’s recommend some movies.

$ aws personalize-rec get-recommendations --campaign-arn $CAMPAIGN_ARN \
    --user-id $USER_ID --query "itemList[*].itemId"
["1210", "260", "2571", "110", "296", "1193", ...]

That’s it! As you can see, we successfully built a recommendation model with a few API calls. All we had to do was define a schema and upload the dataset. We relied on Amazon Personalize to select the best recipe with AutoML, and to optimize its hyper parameters. The solution was trained and deployed on fully-managed infrastructure, letting us focus even more on building our application.

Sign up for the preview now!

I hope this post was informative. We just scratched the surface of what Amazon Personalize can do. The service is available in preview in US-East (Virginia) and US-West (Oregon).

There is no charge for the service during the preview. Once the preview is complete, the service will be part of the AWS free tier. For the first two months after sign-up, you will be offered:
1. Data processing and storage: Up to 20 GB per month
2. Training: Up to 100 training hours per month
3. Inference: Up to 50 TPS-hours of real-time recommendations per month

To get started, visit aws.amazon.com/personalize/. Now it’s your turn to try it and let us know what you think.

Julien;

Categories: Cloud

AWS DeepRacer – Go Hands-On with Reinforcement Learning at re:Invent

AWS Blog - Wed, 11/28/2018 - 10:11

Reinforcement Learning is a type of machine learning that works when an “agent” is allowed to act on a trial-and-error basis within an interactive environment, using feedback from those actions to learn over time in order to reach a predetermined goal or to maximize some type of score or reward. This stands in contrast to other forms of machine learning such as Supervised Learning, where a set of facts (ground truths) are used to train a model so that it can make inferences.

We want you to get some hands-on experience with Reinforcement Learning at AWS re:Invent and I would like to tell you all about it today. This combination of hardware and software will help you get things (literally) moving!

AWS DeepRacer
Let’s talk about the hardware and software first. AWS DeepRacer is a 1/18th scale radio-controlled, four-wheel drive car.

There’s an Intel Atom® processor onboard, a 4 megapixel camera with 1080p resolution, fast (802.11ac) WiFi, multiple USB ports, and enough battery power to last for about 2 hours. The Atom processor runs Ubuntu 16.04 LTS, ROS (Robot Operating System), and the Intel OpenVino computer vision toolkit.

AWS DeepRacer includes a fully-configured cloud environment that you can use to train your Reinforcement Learning models. It takes advantage of the new Reinforcement Learning feature in Amazon SageMaker and also includes a 3D simulation environment powered by AWS RoboMaker. You can train autonomous driving models against a collection of predefined race tracks included with the simulator, and then evaluate them virtually or download them to an AWS DeepRacer car and verify their performance in the real world.

Reinforcement Learning is one of the technologies that are used to make self-driving cars a reality; the AWS DeepRacer is the perfect vehicle (so to speak) for you to go hands-on and learn all about it. We’re ramping up volume production and you will be able to buy one of your very own very soon.

You can pre-order your very own AWS DeepRacer today and sign up to be part of the preview at aws.amazon.com/deepracer.

AWS DeepRacer & Reinforcement Learning at re:Invent
My colleagues have created an incredible program that will get you started with AWS DeepRacer and Reinforcement Learning!

re:Invent attendees can attend a workshop that will teach you the fundamentals of Reinforcement Learning and then show you how to create, train, and tweak an autonomous driving model for an AWS DeepRacer. You’ll create, train, and refine your model on an online simulator and then load it into a genuine AWS DeepRacer for a spin around one of our test tracks. Your goal: Get your AWS DeepRacer around the track as quickly and accurately as possible. There will be a competition every hour, with the chance to win AWS DeepRacers and AWS credits.

Start Your Engines
If you’re here at re:Invent consider yourselves under starters’ orders, because the very first AWS DeepRacer League will take place over the next 24 hours in the AWS DeepRacer workshops and at the MGM Speedway. You will use Amazon SageMaker, AWS RoboMaker, and other AWS services while you learn about Reinforcement Learning. There are 6 main tracks (and a pit area for each), a hacker garage, 2 extra tracks that you can use for training and experimentation, and a DJ to keep you revved up.

From 11:30 AM to 10 PM today (November 28th) every lap time will be entered onto the Speedway Leaderboard. The top 3 developers with the fastest times over the course of the day’s racing will advance to the 2018 grand finale where they will compete to become the AWS DeepRacer 2018 Champion.

The final race will take place on the AWS re:Invent International Speedway at 8 AM on Thursday, just before Werner’s keynote. You will get to race, learn, win prizes, and collect some swag!

AWS DeepRacer League
We want to make sure that developers all over the world have the same opportunity to get involved with AWS DeepRacer as re:Invent attendees. To that end I am excited to announce the AWS DeepRacer League – the world’s first global autonomous racing league, open to anyone. In 2019 there will be a series of live racing events at AWS Global Summits around the world, and we’ll also have virtual events and tournaments throughout the year. Winners and top scorers will advance to the AWS DeepRacer 2019 Championship Cup at re:Invent 2019. I’ll have more detail on that soon, or you can check the AWS DeepRacer site for the latest updates.

I’ll have more details soon, so stay tuned and happy racing!

Jeff;

Categories: Cloud

Amazon SageMaker RL – Managed Reinforcement Learning with Amazon SageMaker

AWS Blog - Wed, 11/28/2018 - 10:07

In the last few years, machine learning (ML) has generated a lot of excitement. Indeed, from medical image analysis to self-driving trucks, the list of complex tasks that ML models can successfully accomplish keeps growing, but what makes these models so smart?

In a nutshell, there are several different ways to train a model; here are three of them:

  1. Supervised learning: run an algorithm on a labelled data set, i.e. a data set containing samples and answers. Gradually, the model will learn how to correctly predict the right answer. Regression and classification are examples of supervised learning.
  2. Unsupervised learning: run an algorithm on an unlabelled data set, i.e. a data set containing samples only. Here, the model will progressively learn patterns in data and organize samples accordingly. Clustering and topic modeling are examples of unsupervised learning.
  3. Reinforcement learning: this one is quite different. Here, a computer program (aka an agent) interacts with its environment: most of the time, this takes place in a simulator. The agent receives a positive or negative reward for actions that it takes: rewards are computed by a user-defined function which outputs a numeric representation of the actions that should be incentivized. By trying to maximize positive rewards, the agent learns an optimal strategy for decision making.

Launched at AWS re:Invent 2017, Amazon SageMaker is helping customers quickly build, train and deploy ML models. Today, with the launch of Amazon SageMaker RL, we’re happy to extend the advantages of Amazon SageMaker to reinforcement learning, making it easier to adopt for all developers and data scientists, regardless of their ML expertise.

A quick primer on reinforcement learning

Reinforcement learning (RL) can sound very confusing at first, so let’s take an example. Imagine an agent learning to navigate a maze. The simulator allows it to move in certain directions but blocks it from going through walls: using RL to learn a policy, the agent soon starts to take increasingly relevant actions.

One critical thing to understand is that the RL model isn’t trained on a predefined set of labelled mazes (that would be supervised learning). Instead, the agent discovers its environment (the current maze) one step at a time, moves one more step, and receives a reward: stepping into a dead end is a negative reward, moving one step closer to the exit is a positive reward. Once a number of different mazes have been processed, the agent trains a model on the accumulated action/reward data points to make better decisions next time around. This cycle of exploring and training is central to RL: given enough mazes and enough training time, the agent would soon know how to navigate any maze.
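
To make the explore/train cycle concrete, here is a minimal tabular Q-learning sketch for a toy grid maze. This is my own illustrative example of the general idea, not the algorithm or API that Amazon SageMaker RL uses:

import random

# Toy maze: 0 = free cell, 1 = wall; start at top-left, exit at bottom-right
MAZE = [[0, 0, 1],
        [1, 0, 1],
        [1, 0, 0]]
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
GOAL = (2, 2)

Q = {}                                 # maps (state, action) -> estimated value
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate

def step(state, a):
    # One simulator step: returns (next_state, reward)
    r, c = state[0] + ACTIONS[a][0], state[1] + ACTIONS[a][1]
    if not (0 <= r < 3 and 0 <= c < 3) or MAZE[r][c] == 1:
        return state, -1.0             # bumped into a wall: negative reward
    return (r, c), 10.0 if (r, c) == GOAL else -0.1

for episode in range(500):
    state = (0, 0)
    while state != GOAL:
        # Explore randomly sometimes, otherwise exploit what we've learned so far
        if random.random() < epsilon:
            a = random.randrange(4)
        else:
            a = max(range(4), key=lambda x: Q.get((state, x), 0.0))
        next_state, reward = step(state, a)
        # Move the value estimate toward reward + discounted best future value
        best_next = max(Q.get((next_state, x), 0.0) for x in range(4))
        Q[(state, a)] = Q.get((state, a), 0.0) + alpha * (reward + gamma * best_next - Q.get((state, a), 0.0))
        state = next_state

After enough episodes, picking the highest-valued action in each cell traces a path to the exit.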

RL is particularly suitable for complex, unpredictable environments that can be simulated and where building a prior dataset would either be infeasible or prohibitively expensive: autonomous vehicles, games, portfolio management, inventory management, robotics, or industrial control systems. For instance, researchers have shown that applying RL-based control to HVAC systems can result in 20%–40% cost savings compared to typical rule-based systems [1], not to mention the large reduction in ecological footprint.

Introducing Amazon SageMaker RL

Amazon SageMaker RL builds on top of Amazon SageMaker, adding pre-packaged RL toolkits and making it easy to integrate any simulation environment. As you would expect, training and prediction infrastructure is fully managed, so that you can focus on your RL problem and not on managing servers.

Today, you can use containers provided by SageMaker for Apache MXNet and TensorFlow that include OpenAI Gym, Intel Coach, and Berkeley Ray RLlib. As usual with Amazon SageMaker, you can easily create your own custom environment using other RL libraries such as TensorForce or Stable Baselines.

When it comes to simulation environments, Amazon SageMaker RL supports the following options:

  • First-party simulators for AWS RoboMaker and Amazon Sumerian.
  • OpenAI Gym environments and open source simulation environments that are developed using Gym interfaces, such as Roboschool or EnergyPlus.
  • Customer-developed simulation environments using the Gym interface (a minimal sketch follows this list).
  • Commercial simulators such as MATLAB and Simulink (customers will need to manage their own licenses).
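
As promised above, here is a minimal sketch of a customer-developed environment exposing the Gym interface; the corridor setting and reward values are invented for illustration:

import numpy as np
import gym
from gym import spaces

class SimpleCorridorEnv(gym.Env):
    # Toy environment: the agent walks a corridor and is rewarded at the end

    def __init__(self, length=10):
        self.length = length
        self.action_space = spaces.Discrete(2)  # 0 = step left, 1 = step right
        self.observation_space = spaces.Box(low=0, high=length, shape=(1,), dtype=np.float32)
        self.position = 0

    def reset(self):
        self.position = 0
        return np.array([self.position], dtype=np.float32)

    def step(self, action):
        self.position += 1 if action == 1 else -1
        self.position = max(0, self.position)
        done = self.position >= self.length
        reward = 1.0 if done else -0.01  # small step cost, terminal reward at the exit
        return np.array([self.position], dtype=np.float32), reward, done, {}

Any object implementing reset() and step() like this can, in principle, be driven by the Gym-compatible RL toolkits listed earlier.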

Amazon SageMaker RL also comes with a collection of Jupyter notebooks, just like Amazon SageMaker does. They are available on GitHub, featuring simple examples (cartpole, simple corridor) as well as advanced ones in a variety of domains such as robotics, operations research, finance, and more. You can easily extend these notebooks and customize them for your own business problem.

In addition, you’ll find examples showing you how to scale RL using either homogeneous or heterogeneous scaling. The latter is particularly important for many RL applications where simulation runs on CPUs and training on GPUs. Your simulation environment can also run locally or remotely in a different network and SageMaker will set everything up for you.

Don’t worry, this is easier than it seems. Let’s look at an example.

Predictive Auto Scaling with Amazon SageMaker RL

Auto Scaling allows you to dynamically scale your service (such as Amazon EC2), adding or removing capacity automatically according to conditions you define. Today, this typically requires setting up thresholds, alarms, scaling policies, etc.

Let’s see how we could optimize this process with an RL model and a custom simulator, pretending to scale your Amazon EC2 capacity (of course, this is just a toy example). For the sake of brevity, I will only highlight the most important code snippets: you’ll find the complete example on GitHub.

Here, the name of the game is to adapt the instance capacity to the load profile. We don’t want to be under-provisioned (losing traffic) or over-provisioned (wasting money): we want to be ‘just right’.

In RL terms:

  • The environment contains the load profile and the number of running instances.
  • At each step, the agent can take two actions: add instances and remove instances. Adding instances helps process more transactions, but they cost money and need a few minutes to come online. Removing instances saves money but reduces the overall processing capacity.
  • The reward is a combination of the cost for running instances and the value for completing successful transactions, with a big penalty for insufficient capacity.

Setting up the simulation

First, we need a simulator in order to generate load profiles similar to what you would observe on a high-traffic web server: let’s use a very simple Python program for that. Here’s an example plotting transactions per minute (tpm) over a 3-day period: mostly periodic with sharp unpredictable spikes.
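
The actual simulator is part of the complete example on GitHub; as a rough idea, a load profile like the one described (daily cycle, noise, rare sharp spikes) could be generated with a sketch like this, where all shape parameters are my own assumptions:

import numpy as np

def simulate_load(minutes=3 * 24 * 60, base_tpm=5000, seed=42):
    # Return a transactions-per-minute series: daily cycle + jitter + rare spikes
    rng = np.random.RandomState(seed)
    t = np.arange(minutes)
    daily = base_tpm * (1 + 0.5 * np.sin(2 * np.pi * t / (24 * 60)))  # periodic daily pattern
    noise = rng.normal(0, base_tpm * 0.05, minutes)                   # small random jitter
    spikes = (rng.rand(minutes) < 0.001) * rng.uniform(2, 5, minutes) * base_tpm  # sharp spikes
    return np.maximum(0, daily + noise + spikes)

tpm = simulate_load()
print(tpm[:10])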

This is the initial state:

config_defaults = {
    "warmup_latency": 5,        # It takes 5 minutes for a new machine to warm up and become available.
    "tpm_per_machine": 300,     # Each machine can process 300 transactions per minute (tpm) on average
    "tpm_sigma": 30,            # Machine's TPM capacity is variable with +/- 30 standard deviation
    "machine_cost": 0.05,       # Machines cost $0.05/min
    "transaction_val": 0.90,    # Successful transactions are worth $0.90 per thousand (CPM)
    "downtime_cost": 200,       # Downtime is assumed to cost the business $200/min beyond incomplete transactions
    "downtime_percent": 99.5,   # Downtime is defined as availability dropping below 99.5%
    "initial_machines": 50,     # How many machines are initially turned on
    "max_time_steps": 1000,     # Maximum number of timesteps per episode
}

Computing the reward

This is quite straightforward! The current load is compared to the current capacity: we earn value for each successful transaction, pay for the running machines, and apply a large penalty when availability drops below 99.5% (a pretty strict definition of downtime!).

def _react_to_load(self):
    self.capacity = int(self.active_machines * np.random.normal(self.tpm_per_machine, self.tpm_sigma))
    if self.current_load <= self.capacity:
        # All transactions succeed
        self.failed = 0
        succeeded = self.current_load
    else:
        # Some transactions failed
        self.failed = self.current_load - self.capacity
        succeeded = self.capacity
    reward = succeeded * self.transaction_val / 1000.0  # divide by thousand for CPM
    percent_success = 100.0 * succeeded / (self.current_load + 1e-20)
    if percent_success < self.downtime_percent:
        self.is_down = 1
        reward -= self.downtime_cost
    else:
        self.is_down = 0
    reward -= self.active_machines * self.machine_cost
    return reward

Stepping through the simulation

Here’s how the agent goes through each time step initiated by the RL framework. As explained above, the model will initially predict random actions, but after a few training rounds, it’ll get much smarter.

def step(self, action):
    # First, react to the actions and adjust the fleet
    turn_on_machines = int(action[0])
    turn_off_machines = int(action[1])
    self.active_machines = max(0, self.active_machines - turn_off_machines)
    warmed_up_machines = self.warmup_queue[0]
    self.active_machines = min(self.active_machines + warmed_up_machines, self.max_machines)
    self.warmup_queue = self.warmup_queue[1:] + [turn_on_machines]
    # Now react to the current load and calculate reward
    self.current_load = self.load_simulator.time_step_load()
    reward = self._react_to_load()
    self.t += 1
    done = self.t > self.max_time_steps
    return self._observation(), reward, done, {}

Training on Amazon SageMaker

Now, we’re ready to train our model, just like any other SageMaker model: passing the image name (here, the TensorFlow container for Intel Coach), the instance type, etc.

rlestimator = RLEstimator(role=role,
                          framework=Framework.TENSORFLOW,
                          framework_version='1.11.0',
                          toolkit=Toolkit.COACH,
                          entry_point="train-autoscale.py",
                          train_instance_count=1,
                          train_instance_type='ml.p3.2xlarge')
rlestimator.fit()

In the training log, we see that the agent first explores its environment without any training: this is called the heatup phase and it’s used to generate an initial dataset to learn from.

## simple_rl_graph: Starting heatup
Heatup> Name=main_level/agent, Worker=0, Episode=1, Total reward=-39771.13, Steps=1001, Training iteration=0
Heatup> Name=main_level/agent, Worker=0, Episode=2, Total reward=-3089.54, Steps=2002, Training iteration=0
Heatup> Name=main_level/agent, Worker=0, Episode=3, Total reward=-43205.29, Steps=3003, Training iteration=0
Heatup> Name=main_level/agent, Worker=0, Episode=4, Total reward=-24542.07, Steps=4004, Training iteration=0
...

Once the heatup phase is complete, the model goes through repeated cycles of learning (aka ‘policy training’) and exploration based on what it has learned (aka ‘training’).

Policy training> Surrogate loss=-0.09095033258199692, KL divergence=0.0003891458618454635, Entropy=2.8382163047790527, training epoch=0, learning_rate=0.0003
Policy training> Surrogate loss=-0.1263471096754074, KL divergence=0.00145535240881145, Entropy=2.836780071258545, training epoch=1, learning_rate=0.0003
Policy training> Surrogate loss=-0.12835979461669922, KL divergence=0.0022696126252412796, Entropy=2.835214376449585, training epoch=2, learning_rate=0.0003
Policy training> Surrogate loss=-0.12992703914642334, KL divergence=0.00254297093488276, Entropy=2.8339898586273193, training epoch=3, learning_rate=0.0003
....
Training> Name=main_level/agent, Worker=0, Episode=152, Total reward=-54843.29, Steps=152152, Training iteration=1
Training> Name=main_level/agent, Worker=0, Episode=153, Total reward=-51277.82, Steps=153153, Training iteration=1
Training> Name=main_level/agent, Worker=0, Episode=154, Total reward=-26061.17, Steps=154154, Training iteration=1

Once the model hits the number of epochs that we set, training is complete. In this case, we trained for 18 minutes: let’s see how well our model learned.

Visualizing training

One way to find out is to plot the rewards received by the agent after each exploration iteration. As expected, rewards in the heatup phase (150 iterations) are extremely negative because the agent hasn’t been trained at all. Then, as soon as training is applied, rewards start to improve rapidly.

Here’s a zoom on post-heatup iterations. As you can see, about halfway through, the agent starts receiving pretty consistent positive rewards, showing that it’s able to apply efficient scaling to the load profiles that it discovers.

Deploying the model

If we’re happy with the model, we can then deploy it just like any SageMaker model and use the newly-created HTTPS endpoint to predict. Alternatively, if you are training a robot then you can also deploy on Edge devices using AWS Greengrass.

Now available

I hope this post was informative. We’ve barely scratched the surface of what Amazon SageMaker RL can do. You can use it today in all regions where Amazon SageMaker is available. Please start exploring and let us know what you think. We can’t wait to see what you will build!

Julien;

[1] “Deep Reinforcement Learning for Building HVAC Control”, T. Wei, Y. Wang and Q. Zhu, DAC’17, June 18-22, 2017, Austin, TX, USA.

Categories: Cloud

NEW – Machine Learning algorithms and model packages now available in AWS Marketplace

AWS Blog - Wed, 11/28/2018 - 10:02

At AWS, our mission is to put machine learning in the hands of every developer. That’s why in 2017 we launched Amazon SageMaker. Since then it has become one of the fastest growing services in AWS history, used by thousands of customers globally. Customers using Amazon SageMaker can choose from the optimized algorithms built into Amazon SageMaker, run fully managed MXNet, TensorFlow, PyTorch, and Chainer jobs, or bring their own algorithms and models. Yet when building their own machine learning models, many customers spend significant time developing algorithms and models that solve problems that have already been solved.

Introducing Machine Learning in AWS Marketplace

I am pleased to announce the new Machine Learning category of products offered by AWS Marketplace, which includes more than 150 algorithms and model packages, with more coming every day. AWS Marketplace offers a tailored selection for vertical industries like retail (35 products), media (19 products), manufacturing (17 products), HCLS (15 products), and more. Customers can find solutions to critical use cases like breast cancer prediction, lymphoma classifications, hospital readmissions, loan risk prediction, vehicle recognition, retail localizer, botnet attack detection, automotive telematics, motion detection, demand forecasting, and speech recognition.

Customers can search and browse a list of algorithms and model packages in AWS Marketplace. Once customers have subscribed to a machine learning solution, they can deploy it directly from the SageMaker console, a Jupyter Notebook, the SageMaker SDK, or the AWS CLI. Amazon SageMaker protects buyers’ data by employing security measures such as static scans, network isolation, and runtime monitoring.

The intellectual property of sellers on AWS Marketplace is protected by encrypting the algorithm and model package artifacts in transit and at rest, using secure (SSL) connections for communications, and ensuring role-based access for deployment of artifacts. AWS provides a secure way for sellers to monetize their work with a frictionless self-service process to publish their algorithms and model packages.

Machine Learning category in Action

Having tried to build my own models in the past, I sure am excited about this feature. After browsing through the available algorithms and model packages from AWS Marketplace, I’ve decided to try the Deep Vision vehicle recognition model, published by Deep Vision AI. This model allows us to identify the make, model, and year of a car from a set of uploaded images. You could use this model for insurance claims, online car sales, and vehicle identification in your business.

I continue to subscribe and accept the default options for the recommended instance type and region. I read and accept the subscription contract, and I am ready to get started with the model.

My subscription is listed in the Amazon SageMaker console and is ready to use. Deploying the model with Amazon SageMaker is the same as any other model package, I complete the steps in this guide to create and deploy our endpoint.
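
If you prefer to script this step, the SageMaker Python SDK can create and deploy an endpoint from a subscribed model package. Here is a hedged sketch; the role ARN, model package ARN, image file name, and endpoint name are placeholders:

import boto3
from sagemaker import ModelPackage, Session

session = Session()
role = 'arn:aws:iam::123456789012:role/SageMakerRole'  # placeholder execution role
package_arn = 'arn:aws:sagemaker:us-east-1:123456789012:model-package/deep-vision-vehicle-recognition'  # placeholder

# Create a deployable model from the subscribed package and deploy an endpoint
model = ModelPackage(role=role, model_package_arn=package_arn, sagemaker_session=session)
model.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge',
             endpoint_name='vehicle-recognition')

# Invoke the endpoint with an image
runtime = boto3.client('sagemaker-runtime')
with open('volvo_xc70.jpg', 'rb') as f:  # placeholder image
    response = runtime.invoke_endpoint(EndpointName='vehicle-recognition',
                                       ContentType='image/jpeg',
                                       Body=f.read())
print(response['Body'].read().decode())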

With our endpoint deployed, I can start asking the model questions. In this case, I will use a single image of a car; the model is trained to detect the make, model, and year information from any angle. First, I will start off with a Volvo XC70 and see what results I get:

Results:

{'result': [{'mmy': {'make': 'Volvo', 'score': 0.97, 'model': 'Xc70', 'year': '2016-2016'}, 'bbox': {'top': 146, 'left': 50, 'right': 1596, 'bottom': 813}, 'View': 'Front Left View'}]}

My model has detected the make, model, and year correctly for the supplied image. I was recently on holiday in the UK and stayed with a relative who had a McLaren 570S supercar. The thought that crossed my mind as the gull-wing doors opened for the first time and I was about to be sitting in the car, was how much it would cost for the insurance excess if things went wrong! Quite apt for our use case today.

Results:

{'result': [{'mmy': {'make': 'Mclaren', 'score': 0.95, 'model': '570S', 'year': '2016-2017'}, 'bbox': {'top': 195, 'left': 126, 'right': 757, 'bottom': 494}, 'View': 'Front Right View'}]}

The score (0.95) measures how confident the model is that the result is right; the range of the score is 0.0 to 1.0. The model is highly confident about the McLaren, with the make, model, and year all correct. Impressive results for a relatively rare car on the road. I test a few more cars supplied by the launch team, who are excitedly looking over my shoulder, and now it’s time to wrap up.

Within ten minutes, I was able to choose a model package, deploy an endpoint, and accurately detect the make, model, and year of vehicles, with no data scientists, no expensive GPUs for training, and without writing any code. You can be sure I will be subscribing to a whole lot more of these models from AWS Marketplace throughout re:Invent week, trying to solve other use cases in less than 15 minutes!

You can access the Machine Learning category through the Amazon SageMaker console or directly through AWS Marketplace itself. Once you have subscribed to an algorithm or model, it is accessible via the console, SDK, and AWS CLI. Algorithms and models from AWS Marketplace can be deployed just like any other model or algorithm, by selecting the AWS Marketplace option as your package source. Once you have chosen an algorithm or model, you can deploy it to Amazon SageMaker by following this guide.

Availability & Pricing

Customers pay a subscription fee for the use of an algorithm or model package, plus the usual AWS resource fees. AWS Marketplace provides a consolidated monthly bill for all purchased subscriptions.

At launch, AWS Marketplace for Machine Learning includes algorithms and models from Deep Vision AI Inc, Knowledgent, RocketML, Sensifai, Cloudwick Technologies, Persistent Systems, Modjoul, H2Oai Inc, Figure Eight [Crowdflower], Intel Corporation, AWS Gluon Model Zoos, and more with new sellers being added regularly. If you are interested in selling machine learning algorithms and model packages, please reach out to aws-mp-bd-ml@amazon.com.

Categories: Cloud

Amazon SageMaker Ground Truth – Build Highly Accurate Datasets and Reduce Labeling Costs by up to 70%

AWS Blog - Wed, 11/28/2018 - 09:59

In 1959, Arthur Samuel defined machine learning as a “field of study that gives computers the ability to learn without being explicitly programmed”. However, there is no deus ex machina: the learning process requires an algorithm (“how to learn”) and a training dataset (“what to learn from”).

Today, most machine learning tasks use a technique called supervised learning: an algorithm learns patterns or behaviours from a labeled dataset, i.e. a dataset containing data samples as well as the correct answer for each one of them, aka the ‘ground truth’. Depending on the problem at hand, one could use labeled images (“this is a dog”, “this is a cat”), labeled text (“this is spam”, “this isn’t”), etc.

Fortunately, developers and data scientists can now rely on a vast collection of off-the-shelf algorithms (as illustrated by the built-in algorithms in Amazon SageMaker) and of reference datasets. Deep learning has popularized image datasets such as MNIST, CIFAR-10 or ImageNet, and more are also available for tasks like machine translation or text classification. These reference datasets are extremely useful for beginners and experienced practitioners alike, but a lot of companies and organizations still need to train machine learning models on their own dataset: think about medical imaging, autonomous driving, etc.

Building such datasets is a complex problem, particularly when working at scale. How long would it take one person to label one thousand images or documents? ‘Quite some time’ is probably the answer! Now imagine having to label one million images or documents: how many people would you now need? For most companies and organizations, this is a moot point, as they would never be able to muster enough people anyway.

Well, no more! Today, I’m very happy to announce Amazon SageMaker Ground Truth, a new capability of Amazon SageMaker that makes it easy for customers to efficiently and accurately label the datasets required for training machine learning systems.

Introducing Amazon SageMaker Ground Truth

Amazon SageMaker Ground Truth helps you build datasets for:

  • Text classification.
  • Image classification, i.e. categorizing images into specific classes.
  • Object detection, i.e. locating objects in images with bounding boxes.
  • Semantic segmentation, i.e. locating objects in images with pixel-level precision.
  • Custom user-defined tasks.

Amazon SageMaker Ground Truth can optionally use active learning to automate the labeling of your input data. Active learning is a machine learning technique that identifies data that needs to be labeled by humans and data that can be labeled by machine. Automated data labeling incurs Amazon SageMaker training and inference costs, but it can help reduce the cost (by up to 70%) and the time it takes to label your dataset, compared to having humans label the complete dataset.

When manual effort is required, you can choose to use a crowdsourced Amazon Mechanical Turk workforce of over 500,000 workers, a private workforce of your own workers, or one of the curated third party vendors listed on the AWS Marketplace.

Let’s look at the high-level steps required to label a dataset:

  • Store your data in Amazon S3,
  • Create a labeling workforce,
  • Create a labeling job,
  • Get to work,
  • Visualize results.

How about an example? Let me show you how to label images from the CBCL StreetScenes dataset. This dataset contains 3548 images such as this one. For the sake of brevity, I will only use the first 10 images and annotate cars only.

Storing data in Amazon S3

The first step is to create a manifest file for the dataset. This is a simple JSON file listing all images present in the dataset. Mine looks like this: please note that each line corresponds to a single object and is an independent JSON document.

{"source-ref": "s3://jsimon-groundtruth-demo/SSDB00001.JPG"} {"source-ref": "s3://jsimon-groundtruth-demo/SSDB00002.JPG"} {"source-ref": "s3://jsimon-groundtruth-demo/SSDB00003.JPG"} {"source-ref": "s3://jsimon-groundtruth-demo/SSDB00004.JPG"} {"source-ref": "s3://jsimon-groundtruth-demo/SSDB00005.JPG"} {"source-ref": "s3://jsimon-groundtruth-demo/SSDB00006.JPG"} {"source-ref": "s3://jsimon-groundtruth-demo/SSDB00007.JPG"} {"source-ref": "s3://jsimon-groundtruth-demo/SSDB00008.JPG"} {"source-ref": "s3://jsimon-groundtruth-demo/SSDB00009.JPG"} {"source-ref": "s3://jsimon-groundtruth-demo/SSDB00010.JPG"}

Then, I simply copy the manifest file and the corresponding images to an Amazon S3 bucket.
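
If you’d rather script this step, a few lines of boto3 can upload the images, build the manifest (one JSON document per line), and store it in the bucket; this is just one way to do it:

import boto3

s3 = boto3.client('s3')
bucket = 'jsimon-groundtruth-demo'
images = ['SSDB%05d.JPG' % i for i in range(1, 11)]

# Upload the images, then build and upload the manifest
for image in images:
    s3.upload_file(image, bucket, image)
manifest = '\n'.join('{"source-ref": "s3://%s/%s"}' % (bucket, image) for image in images)
s3.put_object(Bucket=bucket, Key='manifest.json', Body=manifest.encode('utf-8'))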

Creating a labeling workforce

Amazon SageMaker Ground Truth gives us different options:

  • Public workforce, backed by Amazon Mechanical Turk,
  • Private workforce, backed by internal resources,
  • Vendor workforce, backed by third-party resources.

The first option is probably the most scalable one. However, the last two may be a better fit if your job requires confidentiality, service guarantees, or special skills.

I can only count on myself here, so I create a private team authenticated by a new Amazon Cognito group. Indeed, authentication is required before any worker can access the dataset.

Then, I add myself to the team by entering my email address. A few seconds later, I receive an invitation containing credentials and a URL. This URL can also be found on the labeling workforces dashboard.

Once I’ve clicked on the link and changed my password, I am registered as a verified worker for this team.

The one-man team is now ready. It’s time to create the labeling job itself.

Creating a labeling job

As you would expect, I have to define the location of the manifest file and of the dataset.

Then, I can decide whether I want to use the full dataset or a subset: I could even write a SQL query to filter the files. Here, let’s use the full dataset, as it only has 10 images.

Next, I have to select the type of the labeling job. As stated earlier, there are multiple options available and here I’m interested in adding bounding boxes to my images.

Next, I select the team that I want to assign to the job. This is where I could select automated data labeling. I could also decide to ask multiple workers to label the same image to increase accuracy.

Finally, I can provide additional instructions to workers, detailing the specific task that needs to be performed and giving them a couple of examples.

That’s it. Our labeling job is now ready. Time for the team (well… me, really) to get to work.

Labeling images

Logging into the URL I received by email, I see the list of jobs I’m assigned to.

When I click on the ‘Start working’ button, I see instructions as well as a first image to work on. Using the toolbox, I can draw boxes, zoom in and out, etc. This is pretty intuitive, but drawing boxes that fit just right takes time and care. Now I understand why this is such a time-consuming process… and I have only ten images to go!

Here’s a zoom on another image. Can you see all seven cars?

Once I’m done with all ten images, I can take a well-deserved break and enjoy the completion of the labeling job.

Visualizing results

Annotated images are visible directly in the AWS console, which comes in handy for sanity checks. I can also click on any image and see the list of labels that have been applied.

Of course, our purpose is to use this information to train machine learning models: we can find it in the augmented manifest file stored in our bucket. For example, here’s what the manifest has to say about the first image, where I labeled five cars.

{ "source-ref": "s3://jsimon-groundtruth-demo/SSDB00001.JPG", "GroundTruthDemo": { "annotations": [ {"class_id": 0, "width": 54, "top": 482, "height": 39, "left": 337}, {"class_id": 0, "width": 69, "top": 495, "height": 53, "left": 461}, {"class_id": 0, "width": 52, "top": 482, "height": 41, "left": 523}, {"class_id": 0, "width": 71, "top": 481, "height": 62, "left": 589}, {"class_id": 0, "width": 347, "top": 479, "height": 120, "left": 573} ], "image_size": [{"width": 1280, "depth": 3, "height": 960} ] }, "GroundTruthDemo-metadata": { "job-name": "labeling-job/groundtruthdemo", "class-map": {"0": "Car"}, "human-annotated": "yes", "objects": [ {"confidence": 0.94}, {"confidence": 0.94}, {"confidence": 0.94}, {"confidence": 0.94}, {"confidence": 0.94} ], "creation-date": "2018-11-26T04:01:09.038134", "type": "groundtruth/object-detection" } }

This has all the information required to train an object detection model, such as the built-in Single-Shot Detector available in Amazon SageMaker, but this is another story!
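
To give an idea of the plumbing involved, here is a hedged sketch that reads an augmented manifest and collects the bounding boxes; the attribute names match the example above, and the file name is a placeholder:

import json

boxes = []
with open('output.manifest') as f:  # augmented manifest: one JSON document per line
    for line in f:
        doc = json.loads(line)
        image = doc['source-ref']
        for ann in doc['GroundTruthDemo']['annotations']:
            # Collect (image, class_id, left, top, width, height) tuples
            boxes.append((image, ann['class_id'], ann['left'], ann['top'],
                          ann['width'], ann['height']))

print('%d bounding boxes loaded' % len(boxes))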

Now available!

I hope this post was informative. We just scratched the surface of what Amazon SageMaker Ground Truth can do. The service is available today in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Tokyo) Regions. Now it’s your turn to try it, and let us know what you think!

Julien;

Categories: Cloud

Amazon Elastic Inference – GPU-Powered Deep Learning Inference Acceleration

AWS Blog - Wed, 11/28/2018 - 09:38

One of the reasons for the recent progress of Artificial Intelligence and Deep Learning is the fantastic computing capabilities of Graphics Processing Units (GPU). About ten years ago, researchers learned how to harness their massive hardware parallelism for Machine Learning and High Performance Computing: curious minds will enjoy the seminal paper (PDF) published in 2009 by Stanford University.

Today, GPUs help developers and data scientists train complex models on massive data sets for medical image analysis or autonomous driving. For instance, the Amazon EC2 P3 family lets you use up to eight NVIDIA V100 GPUs in the same instance, for up to 1 PetaFLOP of mixed-precision performance: can you believe that 10 years ago this was the performance of the fastest supercomputer ever built?

Of course, training a model is half the story: what about inference, i.e. putting the model to work and predicting results for new data samples? Unfortunately, developers are often stumped when the time comes to pick an instance type and size. Indeed, for larger models, the inference latency of CPUs may not meet the needs of online applications, while the cost of a full-fledged GPU may not be justified. In addition, resources like RAM and CPU may be more important to the overall performance of your application than raw inference speed.

For example, let’s say your power-hungry application requires a c5.9xlarge instance ($1.53 per hour in us-east-1): a single inference call with an SSD model would take close to 400 milliseconds, which is certainly too slow for real-time interaction. Moving your application to a p2.xlarge instance (the most inexpensive general-purpose GPU instance, at $0.90 per hour in us-east-1) would improve inference performance to 180 milliseconds: then again, this would impact application performance, as p2.xlarge has fewer vCPUs and less RAM.

Well, no more compromising. Today, I’m very happy to announce Amazon Elastic Inference, a new service that lets you attach just the right amount of GPU-powered inference acceleration to any Amazon EC2 instance. This is also available for Amazon SageMaker notebook instances and endpoints, bringing acceleration to built-in algorithms and to deep learning environments.

Pick the best CPU instance type for your application, attach the right amount of GPU acceleration and get the best of both worlds! Of course, you can use EC2 Auto Scaling to add and remove accelerated instances whenever needed.

Introducing Amazon Elastic Inference

Amazon Elastic Inference supports the popular machine learning frameworks TensorFlow, Apache MXNet, and ONNX (applied via MXNet). Changes to your existing code are minimal, but you will need to use AWS-optimized builds, which automatically detect accelerators attached to instances, ensure that only authorized access is allowed, and distribute computation across the local CPU resource and the attached accelerator. These builds are available in the AWS Deep Learning AMIs and on Amazon S3 (so you can build them into your own image or container), and are provided automatically when you use Amazon SageMaker.

Amazon Elastic Inference is available in three sizes, making it efficient for a wide range of inference models including computer vision, natural language processing, and speech recognition.

  • eia1.medium: 8 TeraFLOPs of mixed-precision performance.
  • eia1.large: 16 TeraFLOPs of mixed-precision performance.
  • eia1.xlarge: 32 TeraFLOPs of mixed-precision performance.

This lets you select the best price/performance ratio for your application. For instance, a c5.large instance configured with eia1.medium acceleration will cost you $0.22 an hour (us-east-1). This combination is only 10-15% slower than a p2.xlarge instance, which hosts a dedicated NVIDIA K80 GPU and costs $0.90 an hour (us-east-1). Bottom line: you get a 75% cost reduction for equivalent GPU performance, while picking the exact instance type that fits your application.

Let’s dive in and look at Apache MXNet and TensorFlow examples on an Amazon EC2 instance.

Setting up Amazon Elastic Inference

Here are the high-level steps required to use the service with an Amazon EC2 instance.

  1. Create a security group for the instance allowing only incoming SSH traffic.
  2. Create an IAM role for the instance, allowing it to connect to the Amazon Elastic Inference service.
  3. Create a VPC endpoint for Amazon Elastic Inference in the VPC where the instance will run, attaching a security group allowing only incoming HTTPS traffic from the instance. Please note that you’ll only have to do this once per VPC and that charges for the endpoint are included in the cost of the accelerator.

Creating an accelerated instance

Now that the endpoint is available, let’s use the AWS CLI to fire up a c5.large instance with the AWS Deep Learning AMI.

aws ec2 run-instances --image-id $AMI_ID \
  --key-name $KEYPAIR_NAME --security-group-ids $SG_ID \
  --subnet-id $SUBNET_ID --instance-type c5.large \
  --elastic-inference-accelerator Type=eia1.large

That’s it! You don’t need to learn any new APIs to use Amazon Elastic Inference: simply pass an extra parameter describing the accelerator type. After a few minutes, the instance is up and we can connect to it.

Accelerating Apache MXNet

In this classic example, we will load a large pre-trained convolutional neural network on the Amazon Elastic Inference accelerator (if you’re not familiar with pre-trained models, I covered the topic in a previous post). Specifically, we’ll use a ResNet-152 network trained on the ImageNet dataset.

Then, we’ll simply classify an image on the Amazon Elastic Inference accelerator.

import mxnet as mx
import numpy as np
from collections import namedtuple

Batch = namedtuple('Batch', ['data'])

# Download model (ResNet-152 trained on ImageNet) and ImageNet categories
path = 'http://data.mxnet.io/models/imagenet/'
[mx.test_utils.download(path+'resnet/152-layers/resnet-152-0000.params'),
 mx.test_utils.download(path+'resnet/152-layers/resnet-152-symbol.json'),
 mx.test_utils.download(path+'synset.txt')]

# Set compute context to Elastic Inference Accelerator
# ctx = mx.gpu(0)  # This is how we'd predict on a GPU
ctx = mx.eia()     # This is how we predict on an EI accelerator

# Load pre-trained model
sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-152', 0)
mod = mx.mod.Module(symbol=sym, context=ctx, label_names=None)
mod.bind(for_training=False, data_shapes=[('data', (1,3,224,224))],
         label_shapes=mod._label_shapes)
mod.set_params(arg_params, aux_params, allow_missing=True)

# Load ImageNet category labels
with open('synset.txt', 'r') as f:
    labels = [l.rstrip() for l in f]

# Download and load test image
fname = mx.test_utils.download('https://github.com/dmlc/web-data/blob/master/mxnet/doc/tutorials/python/predict_image/dog.jpg?raw=true')
img = mx.image.imread(fname)

# Convert and reshape image to (batch=1, channels=3, width, height)
img = mx.image.imresize(img, 224, 224)  # Resize to training settings
img = img.transpose((2, 0, 1))          # Channels
img = img.expand_dims(axis=0)           # Batch size
# img = img.as_in_context(ctx)          # Not needed: data is loaded automatically to the EIA

# Predict the image
mod.forward(Batch([img]))
prob = mod.get_outputs()[0].asnumpy()

# Print the top 3 classes
prob = np.squeeze(prob)
a = np.argsort(prob)[::-1]
for i in a[0:3]:
    print('probability=%f, class=%s' % (prob[i], labels[i]))

As you can see, there are only a couple of differences:

  • I set the compute context to mx.eia(). No numbering is required, as only one Amazon Elastic Inference accelerator may be attached to an Amazon EC2 instance.
  • I did not explicitly load the image on the Amazon Elastic Inference accelerator, as I would have done with a GPU. This is taken care of automatically.

Running this example produces the following result.

probability=0.979113, class=n02110958 pug, pug-dog
probability=0.003781, class=n02108422 bull mastiff
probability=0.003718, class=n02112706 Brabancon griffon

What about performance? On our c5.large instance, this prediction takes about 0.23 second on the CPU, and only 0.031 second on its eia1.large accelerator. For comparison, it takes about 0.015 second on a p3.2xlarge instance equipped with a full-fledged NVIDIA V100 GPU. If we use an eia1.medium accelerator instead, the prediction takes 0.046 second, which is just as fast as a p2.xlarge (0.042 second) but at a 75% discount!
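
If you want to reproduce this kind of measurement, a simple wall-clock loop around the prediction (reusing mod, img, and Batch from the example above) is enough; note the warm-up call, since the first invocation includes one-time initialization:

import time

# Warm up: the first call includes one-time setup on the accelerator
mod.forward(Batch([img]))
mod.get_outputs()[0].wait_to_read()

iterations = 100
start = time.time()
for _ in range(iterations):
    mod.forward(Batch([img]))
    mod.get_outputs()[0].wait_to_read()  # force the asynchronous call to complete
print('average latency: %.3f second' % ((time.time() - start) / iterations))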

Accelerating TensorFlow

You can use TensorFlow Serving to serve accelerated predictions: it’s a model server that loads saved models and serves high-performance predictions through REST APIs and gRPC.

Amazon Elastic Inference includes an accelerated version of TensorFlow Serving, which you would use like this.

$ ei_tensorflow_model_server --model_name=resnet --model_base_path=$MODEL_PATH --port=9000
$ python resnet_client.py --server=localhost:9000

Now Available

I hope this post was informative. Amazon Elastic Inference is available now in US East (N. Virginia and Ohio), US West (Oregon), EU (Ireland) and Asia Pacific (Seoul and Tokyo). You can start building applications with it today!

Julien;

Categories: Cloud

Amazon DynamoDB On-Demand – No Capacity Planning and Pay-Per-Request Pricing

AWS Blog - Wed, 11/28/2018 - 09:15

Just a few years ago, creating a database that could support your business at any scale while providing consistent low latency was a daunting task. That changed for me in 2012 while reading Werner Vogels’ blog post announcing Amazon DynamoDB (it was a few months before I joined AWS). DynamoDB was built on the principles in the original Dynamo paper that Amazon published in 2007. Over the years, lots of new features have been introduced to further simplify how AWS customers use databases. You can now create fully managed, multi-region, multi-master database tables with features such as encryption at rest, point-in-time recovery, in-memory caching, and a 99.99% uptime service level agreement (SLA).

Amazon DynamoDB on-demand

Today we are introducing Amazon DynamoDB on-demand, a flexible new billing option for DynamoDB capable of serving thousands of requests per second without capacity planning. DynamoDB on-demand offers simple pay-per-request pricing for read and write requests so that you only pay for what you use, making it easy to balance costs and performance. For tables using on-demand mode, DynamoDB instantly accommodates customers’ workloads as they ramp up or down to any previously observed traffic level. If the level of traffic hits a new peak, DynamoDB adapts rapidly to accommodate the workload.

In the DynamoDB console, you can choose the on-demand read/write capacity mode when creating a new table, or change it later in the Capacity tab.

Tables using on-demand mode support all DynamoDB features (such as encryption at rest, point-in-time recovery, global tables, and so on) with the exception of auto scaling, which is not applicable with this mode.

Indexes created on a table using on-demand mode inherit the same scalability and billing model. You don’t need to specify throughput capacity settings for indexes, and you pay only for the requests they serve. If there is no read/write traffic to a table using on-demand mode and its indexes, you only pay for the data storage.

DynamoDB on-demand is useful if your application traffic is difficult to predict and control, your workload has large spikes of short duration, or if your average table utilization is well below the peak. For example:

  • New applications, or applications whose database workload is complex to forecast
  • Developers working on serverless stacks with pay-per-use pricing
  • SaaS providers and independent software vendors (ISVs) who want the simplicity and resource isolation of deploying a table per subscriber

You can change a table from provisioned capacity to on-demand once per day. You can go from on-demand capacity to provisioned as often as you want.

A quick performance test

Let’s test some load on a newly created DynamoDB table using on-demand mode!

I created two serverless applications:

  • The first application creates a REST API on top of a DynamoDB table using an AWS Lambda function and Amazon API Gateway. Using this API, you can read, add, update, and delete items in the table using HTTP methods such as GET, POST, PUT, and DELETE.
  • The second application starts 1,000 Lambda functions in parallel to generate load on the API endpoint, using random HTTP methods and random data for the items.

Each load-generating function runs 100 concurrent requests; when they have all terminated, it starts another 100, and so on, for one minute. There is no ramp-up period: load generation starts immediately at full speed!

As you can see in the metrics tab for this table in the DynamoDB console, I reached a peak of almost 5,000 requests per second very quickly and without any throttling.

The scaling of the serverless stack, from API Gateway to the Lambda function and the DynamoDB table, was fully managed. I didn’t have to plan for the right throughput, and I could focus on the application logic I was building.

With DynamoDB on-demand you pay only for what you use. For example, in the US East (N. Virginia) region, you are charged $1.25 per million write request units and $0.25 per million read request units, plus the usual data storage costs.

You can use the AWS Command Line Interface (CLI), AWS SDKs, and AWS CloudFormation to create a table using on-demand mode or to change the read/write capacity mode of an existing table.
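
For example, here is a hedged boto3 sketch that creates a table in on-demand mode and switches an existing table between billing modes; the table and attribute names are placeholders:

import boto3

dynamodb = boto3.client('dynamodb')

# Create a new table in on-demand mode: no ProvisionedThroughput needed
dynamodb.create_table(
    TableName='my-on-demand-table',
    KeySchema=[{'AttributeName': 'id', 'KeyType': 'HASH'}],
    AttributeDefinitions=[{'AttributeName': 'id', 'AttributeType': 'S'}],
    BillingMode='PAY_PER_REQUEST')

# Switch an existing table to on-demand (or back to provisioned capacity)
dynamodb.update_table(TableName='my-existing-table', BillingMode='PAY_PER_REQUEST')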

Available now

DynamoDB on-demand is available globally in all commercial regions.

I am really excited by the new possibilities for developers, ISVs and SaaS providers, and I look forward to seeing what you build with pay-per-request billing.

Categories: Cloud

New – Amazon FSx for Lustre

AWS Blog - Wed, 11/28/2018 - 08:32

A pebibyte (PiB – 1,125,899,906,842,624 bytes) is an impressive amount of data, slightly less than half of the estimated memory capacity of a human brain. Data lakes, High-Performance Computing (HPC), and Electronic Design Automation (EDA) applications traditionally work at this scale, as do more recent data-intensive applications such as Machine Learning and media processing.

Amazon FSx for Lustre
Today we are launching Amazon FSx for Lustre, designed to meet the needs of these applications and others that you will undoubtedly dream up. Based on the mature and popular Lustre open source project, Amazon FSx for Lustre is a highly parallel file system that supports sub-millisecond access to petabyte-scale file systems. Thousands of simultaneous clients (EC2 instances and on-premises servers) can drive millions of IOPS (Input/Output Operations per Second) and transfer hundreds of gibibytes of data per second.

You can create a file system in minutes, mount it on any number of clients, and start accessing it right away. This is a fully managed service, so there’s nothing to maintain and nothing to administer. You can build standalone file systems for ephemeral use, or you can seamlessly join them to an S3 bucket and then access the contents of the bucket as if it were a Lustre file system. Each file system is backed by NVMe SSD storage, provisioned in increments of 3.6 TiB, and designed to deliver 200 MB/s of aggregate throughput at 10,000 IOPS for every 1 TiB of provisioned capacity.
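
To put those numbers together: a minimum-size 3.6 TiB file system should deliver roughly 3.6 × 200 ≈ 720 MB/s of aggregate throughput at 36,000 IOPS, and a 1 PiB file system roughly 200 GB/s at over 10 million IOPS.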

Creating a Lustre File System
You can create a Lustre file system from the AWS Management Console, CLI, or by calling the CreateFileSystem function. I’ll use the CLI today; I simply specify the subnets for the Lustre endpoints and the desired storage capacity:

$ aws fsx create-file-system --file-system-type LUSTRE --storage-capacity 3600 --subnet-ids subnet-009a1149
----------------------------------------------------------------------------------------------
|                                      CreateFileSystem                                       |
+--------------------------------------------------------------------------------------------+
||                                        FileSystem                                         ||
|+------------------+-----------------------------------------------------------------------+|
||  CreationTime    |  1542666225.28                                                        ||
||  DNSName         |  fs-00a2e062546ff4fce.fsx.us-east-1.amazonaws.com                     ||
||  FileSystemId    |  fs-00a2e062546ff4fce                                                 ||
||  FileSystemType  |  LUSTRE                                                               ||
||  Lifecycle       |  CREATING                                                             ||
||  OwnerId         |  012345678912                                                         ||
||  ResourceARN     |  arn:aws:fsx:us-east-1:012345678912:file-system/fs-00a2e062546ff4fce  ||
||  StorageCapacity |  3600                                                                 ||
||  VpcId           |  vpc-e68d9c81                                                         ||
|+------------------+-----------------------------------------------------------------------+|
|||                                  LustreConfiguration                                   |||
||+---------------------------------------------------------------+-----------------------+||
|||  WeeklyMaintenanceStartTime                                   |  5:09:00              |||
||+---------------------------------------------------------------+-----------------------+||
|||                                       SubnetIds                                        |||
||+---------------------------------------------------------------------------------------+||
|||  subnet-009a1149                                                                      |||
||+---------------------------------------------------------------------------------------+||

This takes about 5 minutes and then it becomes AVAILABLE:

$ aws fsx describe-file-systems --file-system-id fs-00a2e062546ff4fce | grep Lifecycle
||  Lifecycle  |  AVAILABLE  ||

My EC2 instance already has the Lustre kernel modules and the Lustre client installed:

I create a mount point and mount my Lustre file system:

$ sudo mkdir /fsx
$ sudo mount -t lustre fs-00a2e062546ff4fce.fsx.us-east-1.amazonaws.com@tcp:/fsx /fsx

And my 3.4 TiB Lustre file system is ready to use:

I can also create a file system that sits in front of an S3 bucket (or a prefixed section of an S3 bucket). This allows me to treat my bucket as a data lake, and to process it using tools and applications that are file-based. I simply include the bucket name as the ImportPath when I create the file system:

$ aws fsx create-file-system --file-system-type LUSTRE --storage-capacity 3600 \
  --subnet-ids subnet-009a1149 --lustre-configuration ImportPath=s3://jbarr-src

My bucket has about 1 million files inside, so the creation process takes about 30 minutes (the team told me that ingestion runs at about 500 files per second). Here is my bucket:

And here is what it looks like from my EC2 instance:

At this point, the Lustre file system contains all of the metadata (names, dates, sizes, and so forth) for my objects but it does not have the actual file data. This data is copied from S3 on an as-needed basis. As a result, this command will not access S3:

$ find . -type f

And this one will, with a small latency penalty for each access because objects are copied from S3 to the file system on an as-needed basis:

$ find . -type f -exec grep -l -i main {} \;

If I understand my code’s access pattern, I can use the hsm_restore option of the lfs command to pre-load files before they are needed. Perhaps I plan to analyze all of the C header files:

$ find . -type f -name '*.h' -print0 | \
  xargs -0 -n 50 -P 8 sudo lfs hsm_restore

Any changes that I make to the files remain within the file system. I can export changed files back to S3 using the hsm_archive option of the lfs command:

$ sudo lfs hsm_archive README.md
$ sudo lfs hsm_action README.md

The first command initiates the export operation and the second one indicates that it is complete by printing NOOP. The changed files are written to the same bucket, prefixed by the ExportPath of the file system:

I can discover the ExportPath from the command line:

$ aws fsx describe-file-systems --file-system-id fs-086f5160a68bc158b | grep Path
||||  ExportPath  |  s3://jbarr-src/FSxLustre20181120T005845Z  ||||
||||  ImportPath  |  s3://jbarr-src                            ||||

Each file system publishes a rich set of metrics to CloudWatch:

There’s a lot more, but I’m just about out of space! For example, I didn’t show you the scale that you can achieve using Amazon FSx for Lustre. I used one client, but could just as easily have used thousands.

Things to Know
Here are a couple of interesting things to keep in mind regarding Amazon FSx for Lustre:

Console Access – I wrote this post using the CLI; a full console is also available.

Regions – You can create Lustre file systems in the US East (N. Virginia), US West (Oregon), US East (Ohio), and Europe (Ireland) Regions.

Pricing – Pricing is based on the amount of storage that you have provisioned, and starts at $0.14 per GiB per month in the US East (N. Virginia), US West (Oregon), and Europe (Ireland) Regions.

Access – You can access your file systems from EC2 instances. You can also use AWS Direct Connect to connect your existing data center or colo to AWS, and access your file systems from there.

Security – Access to each file system goes through a security group, with IAM policies for fine-grained access control. Data at rest is encrypted using a 256-bit block cipher and keys managed by Amazon FSx for Lustre.

Available Now
Amazon FSx for Lustre is available now and you can start using it today!

Jeff;

Categories: Cloud

New – Amazon FSx for Windows File Server – Fast, Fully Managed, and Secure

AWS Blog - Wed, 11/28/2018 - 08:29

Organizations that want to run Windows applications on the cloud are commonly looking for network file storage that’s fully compatible with their applications and their Windows environments. For example, enterprises use Active Directory for identification and Windows Access Control Lists for fine-grained control over access to folders and files, and their applications typically rely on storage that provides full Windows file system (NTFS file system) compatibility.

Amazon FSx for Windows File Server
Amazon FSx for Windows File Server fits all of these needs, and more. It was designed from the ground up to work with your existing Windows applications and environments, making lift-and-shift of your Windows workloads to the cloud super-easy. You get a native Windows file system backed by fully-managed Windows file servers, accessible via the widely adopted SMB (Server Message Block) protocol. Built on SSD storage, Amazon FSx for Windows File Server delivers the throughput, IOPS, and consistent sub-millisecond performance that you (and your Windows applications) expect.

Here are the most important things to know:

Accessibility & Protocol Support – You can access your shares from Amazon Elastic Compute Cloud (EC2) instances, Amazon WorkSpaces virtual desktops, Amazon AppStream 2.0 applications, and VMware Cloud on AWS. Versions 2.0 through 3.1.1 of SMB are supported, allowing you to use Windows versions starting from Windows 7 and Windows Server 2008, and current versions of Linux (via Samba). Active Directory integration is built in, allowing you to easily integrate with your existing enterprise environment.

Performance and Tunability – Amazon FSx for Windows File Server delivers consistent, sub-millisecond latency. You can set the file system size and throughput (in megabytes per second) independently, with plenty of latitude in each dimension. File systems can be as big as 64 TB, and can deliver up to 2,048 MB/second of throughput.

Management – Your file systems are fully managed and data is stored in redundant form within an AWS Availability Zone. You don’t have to worry about attaching and formatting additional storage devices, updating Windows Server, or recovering from hardware failures. Incremental file-system consistent backups are taken automatically every day, with the option to take additional backups when needed.

Security – You get multiple levels of access control and data protection. File system endpoints are created within Virtual Private Clouds (VPCs) and access is governed by Security Groups. Windows ACLs are used to control access to folders and files; IAM roles are used to control access to administrative functions, with administrative activities logged to AWS CloudTrail. Your data is encrypted in transit and (using a KMS key that you can control) at rest. The service is PCI-DSS compliant and can be used to build HIPAA-compliant applications.

Multi-AZ Deployment – You create file systems in distinct AWS Availability Zones, and can use Microsoft DFS to set up automatic replication and failover between them. You can also use Microsoft DFS Namespaces to create shared, common namespaces that span multiple file systems and provide up to 300 PB of storage.

Creating a File System
Amazon FSx for Windows File Server is easy to use. I start by confirming that I have an Active Directory with a Domain Controller in the VPC subnet (subnet-009a1149) where I plan to create my file system’s endpoints:

For testing purposes, I also have an EC2 instance running Windows in the same subnet:

I open the Amazon FSx Console, and click Create file system:

I choose my file system option:

I specify a name, size, optional throughput, and other parameters for my new file system, and click Review summary to proceed:

On another browser tab I verify that the security group for the file system is configured to allow connections from my EC2 instance on the desired ports (135, 445, and 55555):

On the next page I review the settings and the estimated monthly costs, and click Create file system. My file system starts out in the Creating status and transitions to Available in minutes:
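
The console flow above can also be scripted. Here is a hedged boto3 sketch of the equivalent API call; the directory ID, subnet, capacity, and throughput values are placeholders:

import boto3

fsx = boto3.client('fsx')

response = fsx.create_file_system(
    FileSystemType='WINDOWS',
    StorageCapacity=300,                      # size in GiB (placeholder)
    SubnetIds=['subnet-009a1149'],
    WindowsConfiguration={
        'ActiveDirectoryId': 'd-1234567890',  # placeholder AWS Managed AD directory
        'ThroughputCapacity': 8,              # desired throughput in MB/s (placeholder)
    })
print(response['FileSystem']['FileSystemId'])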

I can see an overview at a glance:

And I can click the Network & Security tab to get the DNS name for my file system:

I copy the DNS name, hop over to my EC2 instance, open Explorer, and map my file system (a share named share is created automatically):

Then I can use it like any other share (I’m sure that your use case is better than mine, but perhaps not as historically significant):

Each file system includes one share (named share) automatically. I can connect to the file system and create additional shares using the standard Windows tools and wizards:

File-system consistent backups are made daily during the backup window for the file system, and are retained for up to 35 days, as specified when the file system was created. I can also make backups on an as-needed basis:

Available Now
Amazon FSx for Windows File Server is available now and you can start using it today in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) Regions, with expansion to other Regions planned for the coming months. Pricing is based on the amount of storage and throughput that you configure.

Jeff;

Categories: Cloud

New – Amazon Kinesis Data Analytics for Java

AWS Blog - Tue, 11/27/2018 - 17:14

Customers are using Amazon Kinesis to collect, process, and analyze real-time streaming data. In this way, they can react quickly to new information from their business, their infrastructure, or their customers. For example, Epic Games ingests more than 1.5 million game events per second for its popular online game, Fortnite.

With Amazon Kinesis Data Analytics you can process data in real-time using standard SQL. While SQL provides an easy way to quickly query large volumes of streaming data without learning new frameworks or languages, many customers also want to build more sophisticated data processing applications using general-purpose programming languages.

Using Java with Amazon Kinesis Data Analytics

Today, we are introducing support for Java in Amazon Kinesis Data Analytics. Now, developers can use their own Java code to create powerful real-time applications that process streaming data like continuously transforming and loading data into their data lakes, generating metrics to feed real-time gaming leaderboards, applying machine learning models to data streams from connected devices, and more.

To use this new functionality, developers build applications using open source libraries that include built-in operators for common data processing functions, allowing applications to organize, transform, aggregate, and analyze data at any scale. Both libraries are open source, and you can run them anywhere:

  • Apache Flink, an open source framework and engine for processing data streams.
  • AWS SDK for Java, providing Java APIs for many AWS services.

Developers can use these Java libraries within their Integrated Development Environment (IDE) of choice. Using these libraries, the following AWS services can be integrated with as little as one line of code:

  • Streaming Data Sources: Amazon Kinesis Data Streams
  • Streaming Destinations: Amazon S3, Amazon DynamoDB, Amazon Kinesis Data Streams, Amazon Kinesis Data Firehose

In addition to the pre-built AWS integrations, the Java libraries include connectors to tools such as Cassandra, Elasticsearch, RabbitMQ, and Redis, plus the ability to build custom integrations.

Building a Kinesis Data Streams Java Application

I prepared a simple Java application that implements the “mandatory” word count example for data processing. I send some paragraphs of text as input and, every five seconds, get as output the number of times each word is used.

First, I create two Kinesis Data Streams:

  • TextInputStream, where I am going to send my input records
  • WordCountOutputStream, where I am going to read the output of the Java application
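
If you prefer to script this step, here is a minimal boto3 sketch (one shard per stream is plenty for this demo):

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Create both streams and wait until they are active.
for stream in ("TextInputStream", "WordCountOutputStream"):
    kinesis.create_stream(StreamName=stream, ShardCount=1)
    kinesis.get_waiter("stream_exists").wait(StreamName=stream)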

Here is the code of the word-count Java application. To read and write from Kinesis Data Streams, I am using the Kinesis Connector from the Apache Flink project.

import java.util.Properties;

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisProducer;
import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants;
import org.apache.flink.util.Collector;

public class StreamingJob {

    private static final String region = "us-east-1";
    private static final String inputStreamName = "TextInputStream";
    private static final String outputStreamName = "WordCountOutputStream";

    // Source: consume records from the input Kinesis stream, starting at the latest position
    private static DataStream<String> createSourceFromStaticConfig(StreamExecutionEnvironment env) {
        Properties inputProperties = new Properties();
        inputProperties.setProperty(ConsumerConfigConstants.AWS_REGION, region);
        inputProperties.setProperty(ConsumerConfigConstants.STREAM_INITIAL_POSITION, "LATEST");
        return env.addSource(new FlinkKinesisConsumer<>(inputStreamName,
                new SimpleStringSchema(), inputProperties));
    }

    // Sink: produce results to the output Kinesis stream
    private static FlinkKinesisProducer<String> createSinkFromStaticConfig() {
        Properties outputProperties = new Properties();
        outputProperties.setProperty(ConsumerConfigConstants.AWS_REGION, region);
        FlinkKinesisProducer<String> sink =
                new FlinkKinesisProducer<>(new SimpleStringSchema(), outputProperties);
        sink.setDefaultStream(outputStreamName);
        sink.setDefaultPartition("0");
        return sink;
    }

    public static void main(String[] args) throws Exception {
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> input = createSourceFromStaticConfig(env);

        input.flatMap(new Tokenizer())      // split each line into (word, 1) pairs
            .keyBy(0)                       // partition the stream by word
            .timeWindow(Time.seconds(5))    // 5-second tumbling windows
            .sum(1)                         // sum the counts within each window
            .map(new MapFunction<Tuple2<String, Integer>, String>() {
                @Override
                public String map(Tuple2<String, Integer> value) throws Exception {
                    return value.f0 + "," + value.f1.toString();
                }
            })
            .addSink(createSinkFromStaticConfig());

        env.execute("Word Count");
    }

    public static final class Tokenizer
            implements FlatMapFunction<String, Tuple2<String, Integer>> {
        @Override
        public void flatMap(String value, Collector<Tuple2<String, Integer>> out) {
            String[] tokens = value.toLowerCase().split("\\W+");
            for (String token : tokens) {
                if (token.length() > 0) {
                    out.collect(new Tuple2<>(token, 1));
                }
            }
        }
    }
}

The most important part of the application is the manipulation of the input object, where I apply a few DataStream Transformations:

  1. I start with a DataStream containing the String from the input stream.
  2. I use a Tokenizer in a FlatMap to split each sentence into “words”, each word followed by the number “1”.
  3. I apply the KeyBy operator to logically partition the stream with respect to the “word”.
  4. I use a 5-second tumbling window.
  5. I aggregate within the window, summing up the number “1” for each word to count them.
  6. I use a simple Map for each record to join the word and the count into a comma-separated values (CSV) String that I send to the output stream.

One of the most powerful operators shown here is the KeyBy operator. It enables you to re-organize a particular stream by a specified key in real time. This type of re-keying makes further downstream operations possible, such as aggregations and counts, and lets you set up streaming map-reduce on different keys within the same application.

I build the Java application using Maven and load the output JAR to an Amazon Simple Storage Service (S3) bucket in the region where I want to deploy the application. In the Kinesis Data Analytics console, I create a new application and select “Flink” as runtime:

I then configure the application to use the code on my S3 bucket. The console updates the IAM role for the application to have permissions to read the code.

You can optionally add key/value properties to the configuration of the application. You can read those properties from within the application, to provide customization at deployment time.

For monitoring, I leave the default metrics. I enable logging to Amazon CloudWatch, for errors only.

Don’t forget to add permissions to the IAM role created by the console to allow the Kinesis Data Analytics application to read from and write to the streams used for input and output, TextInputStream and WordCountOutputStream in my case.

I can now start the application with the “Run” button, and when it is running, I use a script that I prepared to put some text (I am using a description of the Amazon Kinesis platform) in the input stream:

$ python put_records.py TextInputStream Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data...
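
The put_records.py script itself is not included in the post; here is a minimal sketch of what such a script might look like, sending one record per sentence:

import sys

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Usage: python put_records.py <stream-name> <text...>
stream_name = sys.argv[1]
text = " ".join(sys.argv[2:])

# One record per sentence gives the Flink application a stream of
# events to tokenize and count.
for sentence in text.split("."):
    if sentence.strip():
        kinesis.put_record(
            StreamName=stream_name,
            Data=(sentence.strip() + ".").encode("utf-8"),
            PartitionKey="demo",
        )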

The behavior of my application is summarized in the console in the Application Graph, a visual representation of the data flow consisting of operators and intermediate results (complex applications, using multiple streams, have a much more interesting graph):

To read the output stream, I am using a Lambda function written in Python. I am using the one provided with the Kinesis Record Aggregation & Deaggregation Modules for AWS Lambda, which provides automatic “de-aggregation” of records aggregated by the Amazon Kinesis Producer Library (KPL).
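
A hedged sketch of such a function, assuming the aws_kinesis_agg Python module from those modules is packaged with the Lambda deployment:

import base64

from aws_kinesis_agg.deaggregator import deaggregate_records


def lambda_handler(event, context):
    # Expand any KPL-aggregated records into individual user records.
    for record in deaggregate_records(event["Records"]):
        payload = base64.b64decode(record["kinesis"]["data"]).decode("utf-8")
        print(payload)  # e.g. "amazon,12"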

As expected, in the CloudWatch Logs console I get the list of the words and the number of times they were used, updated every 5 seconds by the Lambda function:

Pricing and Availability

With Amazon Kinesis Data Analytics for Java, you pay only for what you use. Pricing is similar to Amazon Kinesis Data Analytics for SQL, but there are a few differences.

For Java applications, you are charged a single additional Amazon Kinesis Processing Unit (KPU) per application, used for application orchestration. Java applications are also charged for running application storage and durable application backups. Running application storage is used for Amazon Kinesis Data Analytics’ stateful processing capabilities and is charged per GB-month. Durable application backups are optional and provide a point-in-time recovery point for applications, charged per GB-month.

For example, pricing is $0.11 per KPU hour in US East (N. Virginia), and you are charged for running application storage ($0.10 per GB-month) and durable application backups ($0.023 per GB-month).

Available Now

Amazon Kinesis Data Analytics for Java is available now in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) Regions.

I only scratched the surface of the capabilities for stream processing enabled by the support of Java in Amazon Kinesis Data Analytics. I think this is a powerful tool that can enable new use cases. Let me know what you are going to build with it!

Categories: Cloud

New – Amazon CloudWatch Logs Insights – Fast, Interactive Log Analytics

AWS Blog - Tue, 11/27/2018 - 15:02

Many AWS services create logs. Off the top of my head there are VPC Flow Logs, Route 53 Logs, Lambda Logs, CloudTrail Logs (for AWS API calls), RDS Logs, IoT Logs, ECS Logs, API Gateway Logs, S3 Server Access Logs, and EC2 Instance Logs (via the CloudWatch Agent), to name a few. The services that you run on your EC2 instances (Apache, Tomcat, NGINX, and the like) also produce logs, and your application code probably does the same.

Embedded within these logs are the data points, patterns, trends, and insights that you can use to understand how your applications and AWS resources are behaving, identify room for improvement, and address operational issues. But, as usual, there’s a catch. The breadth of formats and data elements and the sheer size of the raw logs can make analysis difficult. When individual AWS customers routinely generate 100 terabytes or more of log files each day, old-school tools such as find and grep no longer suffice!

CloudWatch Logs Insights
The new CloudWatch Logs Insights will help! This is a fully managed service that is designed to work at cloud scale, with no setup or maintenance required. It plows through massive logs in seconds, and gives you fast, interactive queries and visualizations. It can handle any log format, and auto-discovers fields from JSON logs. As you will see, it is very flexible, and will quickly become one of your favorite tools for diving in to your logs.

CloudWatch Logs Insights includes a sophisticated ad-hoc query language, with commands to fetch desired event fields, filter based on conditions, calculate aggregate statistics including percentiles and time series aggregations, sort on any desired field, and limit the number of events returned by a query. You can also use regular expressions to extract data from an event field, creating one or more ephemeral fields that can be further processed by the query. You can visualize query results using line and stacked area charts, and you can add queries to a CloudWatch Dashboard. There’s even a rich set of sample queries to get you started.

Insights in Action
To get started, I open the CloudWatch Console and click Insights:

Then I choose the desired Log Group using the menu:

I can enter a query, or I can choose one of the samples:

As you can see, sample queries are supplied for several different types of logs. I pick the first one, click Run query, the logs are scanned and the results are visible within seconds:

I can add a filter to my query and run it again. Perhaps I want to focus on EC2 API calls, so I use a pipe ( | ) and the filter command:

I can filter by an absolute or relative time range:

I can also generate visualizations. Here’s a simple one: Amazon RDS memory usage metrics for the last 30 minutes, grouped into 1-minute bins:

CloudWatch Logs Insights discovers all of the fields in the events and tells me how common they are in the selected log:

I can use this to build my queries interactively:

For queries that do not do any aggregation, I can expand an event and see all of the fields:

The query language supports six types of commands:

fields – Retrieves one or more log fields. It can also make use of functions such as abs, sqrt, strlen, trim, and more.

filter – Retrieves log events that match one or more conditions built from Boolean operators, comparison operators, and regular expressions.

stats – Calculates aggregate statistics such as sum, avg, count, min, max, and percentile for a log field, across a given time interval (specified using the optional by modifier).

sort – Sorts log events in ascending or descending order.

limit – Limits the number of log events returned by a query.

parse – Extracts data from a log field, creating one or more ephemeral fields that can be further processed by the query.

The language also supports a rich set of arithmetic & comparison operators, numeric functions, string functions, date/time functions, and aggregation functions.
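
To show how these commands compose outside the console, here is a hedged boto3 sketch that finds the slowest recent invocations of a placeholder Lambda function:

import time

import boto3

logs = boto3.client("logs", region_name="us-east-1")

# Query the last hour of a placeholder Lambda log group.
end = int(time.time())
query = logs.start_query(
    logGroupName="/aws/lambda/my-function",
    startTime=end - 3600,
    endTime=end,
    queryString=(
        'fields @timestamp, @duration '
        '| filter @type = "REPORT" '
        '| sort @duration desc '
        '| limit 20'
    ),
)

# Poll until the query completes, then print each result row.
while True:
    results = logs.get_query_results(queryId=query["queryId"])
    if results["status"] == "Complete":
        break
    time.sleep(1)
for row in results["results"]:
    print({f["field"]: f["value"] for f in row})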

As usual, I have shown you a fairly simple subset of the functionality and power that is available to you. Here are a couple of things that you can try on your own:

Add to Dashboard – After you have created an insightful query, click Add to Dashboard, then select an existing dashboard or create a new one:

Copy Query Results – After you have used CloudWatch Logs Insights to discover an issue, click the Action menu and choose Copy query results:

Then you can paste the results into your ticketing system for resolution.

API and CLI Access – In addition to console access, this feature is accessible via the AWS Command Line Interface (CLI) and the AWS SDKs.

CloudWatch Integration – You can write a bit of glue code to run queries and use the results to publish custom metrics. Then you can visualize them, set alarms, and so forth, all with the goal of simplifying and accelerating your troubleshooting.
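
A hedged sketch of that glue code, counting error events in a placeholder log group and publishing the count as a custom metric:

import time

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Count error events over the last five minutes (placeholder log group).
end = int(time.time())
query = logs.start_query(
    logGroupName="/my/app/logs",
    startTime=end - 300,
    endTime=end,
    queryString='filter @message like /ERROR/ | stats count(*) as errors',
)
while True:
    results = logs.get_query_results(queryId=query["queryId"])
    if results["status"] == "Complete":
        break
    time.sleep(1)

# Publish the count so it can be graphed and alarmed on like any metric.
errors = float(results["results"][0][0]["value"]) if results["results"] else 0.0
cloudwatch.put_metric_data(
    Namespace="MyApp",
    MetricData=[{"MetricName": "ErrorCount", "Value": errors}],
)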

Available Now
CloudWatch Logs Insights is available now in the US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Europe (Ireland), Europe (Frankfurt), Europe (London), Europe (Paris), Asia Pacific (Tokyo), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Sydney), and South America (São Paulo) Regions and you can start using it today.

Pricing is based on the amount of ingested log data scanned for each query; you pay $0.005 per GB in US East (N. Virginia), with similar prices in the other regions.

Categories: Cloud

New – AWS Elemental MediaConnect for ingestion and distribution of video in the cloud

AWS Blog - Tue, 11/27/2018 - 15:01

Before AWS, I worked at an organization that owned and operated its own sports TV channel, aggregating tens of venues’ worth of local sports feeds into one 24-hour TV channel. The infrastructure and logistics of operating a broadcast-grade network at this scale were immense, and it always proved difficult and expensive to change and maintain.

This was not a localized problem; media companies and aggregators face similar challenges with their own broadcast infrastructure. Consolidating feeds from non-urban areas via satellite trucks and distributing video streams to multiple regions and countries, all while maintaining reliability and broadcast-grade capability, is still a difficult task that requires capital investment.


Introducing AWS Elemental MediaConnect

AWS Elemental MediaConnect is a new service that makes it easy for broadcasters and other premium video customers to reliably ingest live video into the cloud and securely transmit it to multiple destinations through the AWS global network. AWS Elemental MediaConnect gives customers the reliability, security, and visibility that they are used to with satellite transmission, with the flexibility and cost-effective economics only possible with an internet-based transmission. It lets any customer, from a small video producer covering a local sporting event to a national broadcast television network with multiple 24×7 live TV channels, reliably ingest their content from sources outside the AWS cloud (like a sporting venue or a TV studio), and securely transmit it to multiple destinations with broadcast-grade reliability and operational visibility. These destinations can be a customer’s own AWS-based video processing systems or a destination on the internet.


What you need to know:

Broadcast Reliability – AWS Elemental MediaConnect is engineered for broadcast-level reliability, with optimizations to reduce jitter and buffering. It offers customers a choice of video transmission protocols (such as Real-Time Transport Protocol (RTP), RTP with forward error correction (FEC), and the Zixi protocol) that video professionals use to ensure reliability. MediaConnect uses the low-latency, high-bandwidth AWS global network to distribute and replicate feeds between AWS Regions.

Industry-Grade Security – MediaConnect supports broadcasters’ requirements for security. It provides the option to encrypt streams using standard AES-256 encryption and stores keys securely using AWS Secrets Manager. Together with the replication feature of MediaConnect, which allows users to create multiple outputs, customers can securely syndicate their content to distributors inside and outside AWS.

Visibility & Operations – Finally, AWS Elemental MediaConnect gives video professionals visibility into the health of their content streams. With MediaConnect, they can track the health of mission-critical streams using a combination of quality of service (QoS) alarms and real-time signal telemetry, with no additional setup. Furthermore, MediaConnect is tightly integrated with the other AWS Elemental Media Services and CloudWatch, allowing easy creation of dashboards and alarms.


AWS Elemental MediaConnect in Action

Today I will be setting up ShaunTV, a global video on demand platform just for me. I will be using a live feed from an on-premises media encoder that I wish to ingest into the cloud and distribute to multiple regions. This is similar to a traditional media broadcaster or regional aggregator who does this across several feeds. Getting started is as simple as creating a new video feed and connecting it to AWS Elemental MediaConnect.


Using the console, I create a new flow, where I define my ingest options. In this case, the AWS Elemental team is providing me with a video feed from one of their on-premises encoders. I choose a standard source and select the ingestion protocol. The Zixi protocol is a commercial video distribution format that is widely used in the media industry and will be our source format for today. Providing a whitelisted CIDR block allows me to restrict access to my MediaConnect ingestion point.

From here I can choose to provide a decryption key, which will allow me to decode an encrypted stream. In this case, my stream is not encrypted and I continue to create the flow.
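
The same flow can be created programmatically. A hedged boto3 sketch, with the flow name, source name, and whitelisted CIDR block as placeholders:

import boto3

mediaconnect = boto3.client("mediaconnect", region_name="eu-west-1")

# Create a flow with a Zixi push source; names and CIDR are placeholders.
flow = mediaconnect.create_flow(
    Name="ShaunTV-live",
    Source={
        "Name": "on-premises-encoder",
        "Protocol": "zixi-push",
        "IngestPort": 2088,                  # default Zixi port
        "WhitelistCidr": "203.0.113.0/24",
    },
)
print(flow["Flow"]["FlowArn"])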

The next step is to turn on the flow and start receiving video! Now I want to do two things. First, using the built-in integration with other AWS services, I will connect my MediaConnect stream to AWS Elemental MediaLive, which encodes my video feed for playback on end devices. Second, I want to distribute my video from Europe (Ireland) to the US West (Oregon) Region so I can make ShaunTV global!

Granting entitlements allows me to generate an ARN (Amazon Resource Name) that I can share with other AWS accounts in the same region as my MediaConnect endpoint. I am using the same account, so I proceed to build a new flow using the entitled source option.
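
Entitlements can also be granted programmatically. A hedged sketch, with the flow ARN and subscriber account ID as placeholders:

import boto3

mediaconnect = boto3.client("mediaconnect", region_name="eu-west-1")

# Entitle another AWS account (placeholder ID) to subscribe to this flow.
response = mediaconnect.grant_flow_entitlements(
    FlowArn="arn:aws:mediaconnect:eu-west-1:111122223333:flow:placeholder",
    Entitlements=[{
        "Name": "ShaunTV-syndication",
        "Subscribers": ["444455556666"],
    }],
)
print(response["Entitlements"][0]["EntitlementArn"])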

My ARN is now populated in my new flow, and I can push the flow live to watch my video in the US or distribute it to other AWS Elemental services. You then send a single video stream to the ingest point, and MediaConnect automatically replicates it to each of the specified destinations. You have access to real-time metrics from the ingest point, and it’s easy to reroute flows on the fly from the AWS console or the MediaConnect API. We could distribute the video to many regions, on-premises locations, or third-party AWS accounts, or build an entire video on demand platform with the other AWS Elemental services. Unfortunately, space is limited and so is time, with AWS re:Invent already underway, so I leave it to you to experiment from here!

Availability and Pricing

There are no upfront fees or minimum commitments; you pay for data transferred using AWS Elemental MediaConnect, plus an hourly price for each running flow. The service is available in 8 Regions: US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), EU (Frankfurt), and EU (Ireland).

Categories: Cloud

New – Amazon DynamoDB Transactions

AWS Blog - Tue, 11/27/2018 - 14:50

Over the years, customers have used Amazon DynamoDB for lots of different use cases, from building microservices and mobile backends to implementing gaming and Internet of Things (IoT) solutions. For example, Capital One uses DynamoDB to reduce the latency of their mobile applications by moving their mainframe transactions to a serverless architecture. Tinder migrated user data to DynamoDB with zero downtime, to get the scalability they need to support their global user base.

Developers sometimes need to implement business logic that requires multiple, all-or-nothing operations across one or more tables. This requirement can add unnecessary complexity to their implementation. Today, we are making these use cases easier to build on DynamoDB with native support for transactions!

Introducing Amazon DynamoDB Transactions

DynamoDB transactions provide developers atomicity, consistency, isolation, and durability (ACID) across one or more tables within a single AWS account and region. You can use transactions when building applications that require coordinated inserts, deletes, or updates to multiple items as part of a single logical business operation. DynamoDB is the only non-relational database that supports transactions across multiple partitions and tables.

Transactions bring the scale, performance, and enterprise benefits of DynamoDB to a broader set of workloads. Many use cases are easier and faster to implement using transactions, for example:

  • Processing financial transactions
  • Fulfilling and managing orders
  • Building multiplayer game engines
  • Coordinating actions across distributed components and services

Two new DynamoDB operations have been introduced for handling transactions:

  • TransactWriteItems, a batch operation that contains a write set, with one or more PutItem, UpdateItem, and DeleteItem operations. TransactWriteItems can optionally check for prerequisite conditions that must be satisfied before making updates. These conditions may involve the same or different items than those in the write set. If any condition is not met, the transaction is rejected.
  • TransactGetItems, a batch operation that contains a read set, with one or more GetItem operations. If a TransactGetItems request is issued on an item that is part of an active write transaction, the read transaction is canceled. To get the previously committed value, you can use a standard read.

Each transaction can include up to 10 unique items or up to 4 MB of data, including conditions.
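
As an example of a transactional read, here is a hedged Python (boto3) sketch; the table and key names are illustrative (they match the game example developed below):

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Read two items as a single consistent snapshot.
response = dynamodb.transact_get_items(
    TransactItems=[
        {"Get": {"TableName": "players", "Key": {"id": {"S": "player-1"}}}},
        {"Get": {"TableName": "items", "Key": {"id": {"S": "item-7"}}}},
    ]
)
for item in response["Responses"]:
    print(item.get("Item"))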

With this new feature, DynamoDB offers multiple read and write options to meet different application requirements, providing huge flexibility to developers implementing complex, data-driven business logic:

  • Three options for reads—eventual consistency, strong consistency, and transactional.
  • Two for writes—standard and transactional.

For example, imagine you are building a game where players can buy items with virtual coins:

  • In the players table, each player has a number of coins and an inventory of purchased items.
  • In the items table, each item has a price and is marked as available (or not) with a Boolean value.

To purchase an item, you can now implement a single atomic transaction:

  1. First, check that the item is available and the player has the necessary coins.
  2. If those conditions are satisfied, the item is marked as not available and owned by the player.
  3. The purchased item is then added to the player inventory list.

In JavaScript, using the AWS SDK for JavaScript in Node.js, you would have code similar to this:

data = await dynamoDb.transactWriteItems({
    TransactItems: [
        {
            // Mark the item as sold, but only if it is still available
            Update: {
                TableName: 'items',
                Key: { id: { S: itemId } },
                ConditionExpression: 'available = :true',
                UpdateExpression: 'set available = :false, ' +
                                  'ownedBy = :player',
                ExpressionAttributeValues: {
                    ':true': { BOOL: true },
                    ':false': { BOOL: false },
                    ':player': { S: playerId }
                }
            }
        },
        {
            // Deduct the price and add the item to the player's inventory,
            // but only if the player has enough coins
            Update: {
                TableName: 'players',
                Key: { id: { S: playerId } },
                ConditionExpression: 'coins >= :price',
                UpdateExpression: 'set coins = coins - :price, ' +
                                  'inventory = list_append(inventory, :items)',
                ExpressionAttributeValues: {
                    ':items': { L: [{ S: itemId }] },
                    ':price': { N: itemPrice.toString() }
                }
            }
        }
    ]
}).promise();

Using Transactions

Transactions are enabled for all single-region DynamoDB tables and are disabled on global tables by default. You can choose to enable transactions on global tables by request, but replication across regions is asynchronous and eventually consistent. You may observe partially completed transactions during replication to other regions. Additionally, simultaneous writes to the same item in different regions are not guaranteed to be serially isolated.

Items are not locked during a transaction. DynamoDB transactions provide serializable isolation. If an item is modified outside of a transaction while the transaction is in progress, the transaction is canceled and an exception is thrown with details about which item or items caused the exception.
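
In Python (boto3), for example, here is a hedged sketch of catching a canceled transaction; the table, key, and attribute names are illustrative:

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

transact_items = [{
    "Update": {
        "TableName": "items",
        "Key": {"id": {"S": "item-7"}},
        "ConditionExpression": "available = :true",
        "UpdateExpression": "set available = :false",
        "ExpressionAttributeValues": {
            ":true": {"BOOL": True},
            ":false": {"BOOL": False},
        },
    }
}]

try:
    dynamodb.transact_write_items(TransactItems=transact_items)
except dynamodb.exceptions.TransactionCanceledException as e:
    # CancellationReasons lists, per item, why the transaction was canceled
    # (e.g. ConditionalCheckFailed or TransactionConflict).
    print(e.response.get("CancellationReasons"))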

When creating an AWS Identity and Access Management (IAM) policy, there are no new permissions for TransactGetItems and TransactWriteItems. The existing DynamoDB UpdateItem, PutItem, DeleteItem, and GetItem actions also authorize the use of those operations within transactions. For example, if an IAM user has only PutItem permission, they can send a transaction with one or more puts, but if they add a delete to the write set, it will be rejected because they do not have DeleteItem permission.

For any committed operation that was part of a transaction, DynamoDB Streams adds a new field, transaction-id, as a universally unique identifier (UUID) for the transaction. The in-order and exactly-once semantics of DynamoDB Streams guarantee that eventually all updates of a TransactWriteItems request will be propagated through streams in an order that is consistent with the transaction serialization order.

Pricing, Monitoring, and Availability

There is no additional cost to enable transactions for DynamoDB tables. You only pay for the reads or writes that are part of your transaction. DynamoDB performs two underlying reads or writes of every item in the transaction, one to prepare the transaction and one to commit the transaction. The two underlying read/write operations are visible in your CloudWatch metrics. You should plan your costs, capacity, and performance needs assuming each transactional read performs two reads and each transactional write performs two writes.

DynamoDB transactions are available globally in all commercial regions.

I am really intrigued by these new capabilities. Please let me know what you are going to use them for!

Categories: Cloud
