
Feed aggregator

Announcing Alexa for Business: Using Amazon Alexa’s Voice Enabled Devices for Workplaces

AWS Blog - Thu, 11/30/2017 - 09:00

There are only a few things more integrated into my day-to-day life than Alexa. I use my Echo device and the enabled Alexa Skills for turning on lights in my home, checking video from my Echo Show to see who is ringing my doorbell, keeping track of my extensive to-do list on a weekly basis, playing music, and lots more. I even have my family members enabling Alexa skills on their Echo devices for all types of activities that they now cannot seem to live without. My mother, who is in a much older generation (please don’t tell her I said that), uses her Echo and the custom Alexa skill I built for her to store her baking recipes. She also enjoys exploring skills that have the latest health and epicurean information. It’s no wonder then, that when I go to work I feel like something is missing. For example, I would love to be able to ask Alexa to read my flash briefing when I get to the office.



For those of you that would love to have Alexa as your intelligent assistant at work, I have exciting news. I am delighted to announce Alexa for Business, a new service that enables businesses and organizations to bring Alexa into the workplace at scale. Alexa for Business not only brings Alexa into your workday to boost your productivity, but also provides tools and resources for organizations to set up and manage Alexa devices at scale, enable private skills, and enroll users.

Making Workplaces Smarter with Alexa for Business

Alexa for Business brings the Alexa you know and love into the workplace to help all types of workers to be more productive and organized on both personal and shared Echo devices. In the workplace, shared devices can be placed in common areas for anyone to use, and workers can use their personal devices to connect at work and at home.

End users can use shared devices or personal devices. Here’s what they can do from each.

Shared devices

  1. Join meetings in conference rooms: You can simply say “Alexa, start the meeting”. Alexa turns on the video conferencing equipment, dials into your conference call, and gets the meeting going.
  2. Help around the office: access custom skills to help with directions around the office, finding an open conference room, reporting a building equipment problem, or ordering new supplies.

Personal devices

  1. Enable calling and messaging: Alexa helps make phone calls hands-free and can also send messages on your behalf.
  2. Automatically dial into conference calls: Alexa can join any meeting with a conference call number via voice from home, work, or on the go.
  3. Intelligent assistant: Alexa can quickly check calendars, help schedule meetings, manage to-do lists, and set reminders.
  4. Find information: Alexa can help find information in popular business applications like Salesforce, Concur, or Splunk.

Here are some of the controls available to administrators:

  1. Provision & Manage Shared Alexa Devices: You can provision and manage shared devices around your workplace using the Alexa for Business console. For each device you can set a location, such as a conference room designation, and assign public and private skills for the device.
  2. Configure Conference Room Settings: Kick off your meetings with a simple “Alexa, start the meeting.” Alexa for Business allows you to configure your conference room settings so you can use Alexa to start your meetings and control your conference room equipment, or dial in directly from the Amazon Echo device in the room.
  3. Manage Users: You can invite users in your organization to enroll their personal Alexa account with your Alexa for Business account. Once your users have enrolled, you can enable your custom private skills for them to use on any of the devices in their personal Alexa account, at work or at home.
  4. Manage Skills: You can assign public skills and custom private skills your organization has created to your shared devices, and make private skills available to your enrolled users.  You can create skills groups, which you can then assign to specific shared devices.
  5. Build Private Skills & Use Alexa for Business APIs:  Dig into the Alexa Skills Kit and build your own skills.  Then you can make these available to the shared devices and enrolled users in your Alexa for Business account, all without having to publish them in the public Alexa Skills Store.  Alexa for Business offers additional APIs, which you can use to add context to your skills and automate administrative tasks.

Let’s take a quick journey into Alexa for Business. I’ll first log into the AWS Console and go to the Alexa for Business service.


Once I log in to the service, I am presented with the Alexa for Business dashboard. As you can see, I have access to manage Rooms, Shared devices, Users, and Skills, as well as the ability to control conferencing, calendars, and user invitations.

First, I’ll start by setting up my Alexa devices. Alexa for Business provides a Device Setup Tool to set up multiple devices, connect them to your Wi-Fi network, and register them with your Alexa for Business account. This is quite different from the setup process for personal Alexa devices. With Alexa for Business, you can provision 25 devices at a time.

Once my devices are provisioned, I can create location profiles for the locations where I want to put these devices (such as in my conference rooms). We call these locations “Rooms” in our Alexa for Business console. I can go to the Room profiles menu and create a Room profile. A Room profile contains common settings for the Alexa device in your room, such as the wake word for the device, the address, time zone, unit of measurement, and whether I want to enable outbound calling.
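The same room-profile settings can be created programmatically through the Alexa for Business API (the boto3 `alexaforbusiness` client). Below is a minimal sketch of a CreateProfile request; the field names mirror the console labels but should be verified against the API reference, and the room name, address, and timezone are placeholders:

```python
def build_room_profile(name, address, timezone,
                       wake_word="ALEXA", outbound_calling=False):
    """Assemble a CreateProfile request for the Alexa for Business API.
    Field names follow the API's PascalCase convention; treat this as a
    sketch and check each field against the API reference before use."""
    return {
        "ProfileName": name,
        "Address": address,
        "Timezone": timezone,
        "DistanceUnit": "IMPERIAL",       # or "METRIC"
        "TemperatureUnit": "FAHRENHEIT",  # or "CELSIUS"
        "WakeWord": wake_word,            # ALEXA | AMAZON | ECHO | COMPUTER
        "PSTNEnabled": outbound_calling,  # whether to allow outbound calling
    }

profile = build_room_profile(
    "Conference Room A",           # placeholder room designation
    "2121 7th Ave, Seattle, WA",   # placeholder address
    "America/Los_Angeles")
# To apply it for real (requires credentials):
# boto3.client("alexaforbusiness").create_profile(**profile)
```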

The next step is to enable skills for the devices I set up. I can enable any skill from the Alexa Skills store, or use the private skills feature to enable skills I built myself and made available to my Alexa for Business account. To enable skills for my shared devices, I can go to the Skills menu option and enable skills. After I have enabled skills, I can add them to a skill group and assign the skill group to my rooms.

Something I really like about Alexa for Business is that I can use Alexa to dial into conference calls. To enable this, I go to the Conferencing menu option and select Add provider. At Amazon we use Amazon Chime, but you can choose from a list of different providers, or you can even add your own provider if you want to.

Once I’ve set this up, I can say “Alexa, join my meeting”; Alexa asks for my Amazon Chime meeting ID, after which my Echo device will automatically dial into my Amazon Chime meeting. Alexa for Business also provides an intelligent way to start any meeting quickly. We’ve all been in the situation where we walk into a meeting room and can’t find the meeting ID or conference call number. With Alexa for Business, I can link to my corporate calendar, so Alexa can figure out the meeting information for me, and automatically dial in – I don’t even need my meeting ID. Here’s how you do that:

Alexa can also control the video conferencing equipment in the room. To do this, all I need to do is select the skill for the equipment that I have, select the equipment provider, and enable it for my conference rooms. Now when I ask Alexa to join my meeting, Alexa will dial-in from the equipment in the room, and turn on the video conferencing system, without me needing to do anything else.


Let’s switch to enrolled users next.

I’ll start by setting up the User Invitation for my organization so that I can invite users to my Alexa for Business account. To allow a user to use Alexa for Business within an organization, you invite them to enroll their personal Alexa account with the service by sending a user invitation via email from the management console. If I choose, I can customize the user enrollment email to contain additional content. For example, I can add information about my organization’s Alexa skills that can be enabled after they’ve accepted the invitation and completed the enrollment process. My users must join in order to use the features of Alexa for Business, such as auto dialing into conference calls, linking their Microsoft Exchange calendars, or using private skills.

Now that I have customized my User Invitation, I will invite users to take advantage of Alexa for Business for my organization by going to the Users menu on the Dashboard and entering their email address.  This will send an email with a link that can be used to join my organization. Users will join using the Amazon account that their personal Alexa devices are registered to. Let’s invite Jeff Barr to join my Alexa for Business organization.

After Jeff has enrolled in my Alexa for Business account, he can discover the private skills I’ve enabled for enrolled users, and he can access his work skills and join conference calls from any of his personal devices, including the Echo in his home office.


We’ve only scratched the surface in our brief review of the Alexa for Business console and service features.

You can learn more about Alexa for Business by viewing the Alexa for Business website, watching the Alexa for Business overview video, reading the admin and API guides in the AWS documentation, or by watching the Getting Started videos within the Alexa for Business console.

“Alexa, say goodbye and sign off the blog post.”


Categories: Cloud

PHP 7.2.0 Released

PHP News - Thu, 11/30/2017 - 02:04
Categories: PHP

Keeping Time With Amazon Time Sync Service

AWS Blog - Wed, 11/29/2017 - 17:17

Today we’re launching Amazon Time Sync Service, a time synchronization service delivered over Network Time Protocol (NTP) which uses a fleet of redundant satellite-connected and atomic clocks in each region to deliver a highly accurate reference clock. This service is provided at no additional charge and is immediately available in all public AWS regions to all instances running in a VPC.

You can access the service via the link-local IP address 169.254.169.123. This means you don’t need to configure external internet access and the service can be securely accessed from within your private subnets.


Chrony is a different implementation of NTP than what ntpd uses and it’s able to synchronize the system clock faster and with better accuracy than ntpd. I’d recommend using Chrony unless you have a legacy reason to use ntpd.

Installing and configuring chrony on Amazon Linux is as simple as:

sudo yum erase ntp*
sudo yum -y install chrony
sudo service chronyd start

Alternatively, just modify your existing NTP config by adding the line server 169.254.169.123 prefer iburst.

On Windows you can run the following commands in PowerShell or a command prompt:

net stop w32time
w32tm /config /syncfromflags:manual /manualpeerlist:"169.254.169.123"
w32tm /config /reliable:yes
net start w32time

Leap Seconds

Time is hard. Science, and society, measure time with respect to the International Celestial Reference Frame (ICRF), which is computed using long baseline interferometry of distant quasars, GPS satellite orbits, and laser ranging of the moon (cool!). Irregularities in Earth’s rate of rotation cause UTC to drift with respect to the ICRF. To address this clock drift, the International Earth Rotation and Reference Systems Service (IERS) occasionally introduces an extra second into UTC to keep it within 0.9 seconds of real time.

Leap seconds are known to cause application errors, and this can be a concern for many savvy developers and systems administrators. The Amazon Time Sync Service clock smooths out each leap second over a period of time (commonly called leap smearing), which makes it easy for your applications to deal with leap seconds.
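To make the idea concrete, here is a toy model of a linear smear that spreads the extra second evenly across a 24-hour window. The window length and linear shape are illustrative assumptions, not the service’s exact scheme:

```python
def smeared_offset(seconds_into_window, window=86400.0):
    """Fraction of a leap second that has been applied after
    `seconds_into_window` seconds of a linear smear spread over
    `window` seconds (24 hours by default). Toy illustration only;
    not the exact algorithm used by the service."""
    t = min(max(seconds_into_window, 0.0), window)
    return t / window  # ramps smoothly from 0.0 to 1.0 extra seconds

print(smeared_offset(43200))  # halfway through the window → 0.5
```

Because the clock never jumps, applications see a slightly stretched second rather than a repeated or skipped one.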

This timely update should provide immediate benefits to anyone previously relying on an external time synchronization service.


Categories: Cloud

T2 Unlimited – Going Beyond the Burst with High Performance

AWS Blog - Wed, 11/29/2017 - 16:35

I first wrote about the T2 instances in the summer of 2014, and talked about how many workloads have a modest demand for continuous compute power and an occasional need for a lot more. This model resonated with our customers; the T2 instances are very popular and are now used to host microservices, low-latency interactive applications, virtual desktops, build & staging environments, prototypes, and the like.

New T2 Unlimited
Today we are extending the burst model that we pioneered with the T2, giving you the ability to sustain high CPU performance over any desired time frame while still keeping your costs as low as possible. You simply enable this feature when you launch your instance; you can also enable it for an instance that is already running. The hourly T2 instance price covers all interim spikes in usage if the average CPU utilization is lower than the baseline over a 24-hour window. There’s a small hourly charge if the instance runs at higher CPU utilization for a prolonged period of time. For example, if you run a t2.micro instance at an average of 15% utilization (5% above the baseline) for 24 hours you will be charged an additional 6 cents (5 cents per vCPU-hour * 1 vCPU * 5% * 24 hours).
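The 6-cent example works out as follows; a small helper using the $0.05-per-vCPU-hour Linux rate quoted above makes the arithmetic explicit:

```python
def surplus_charge(avg_util, baseline, vcpus, hours, rate=0.05):
    """Approximate T2 Unlimited surplus charge: only the portion of
    average CPU utilization above the baseline is billed, at `rate`
    dollars per vCPU-hour ($0.05 on Linux, per the post)."""
    over = max(avg_util - baseline, 0.0)  # utilization above baseline
    return rate * vcpus * over * hours

# t2.micro (1 vCPU, 10% baseline) averaging 15% for 24 hours:
print(round(surplus_charge(0.15, 0.10, 1, 24), 2))  # → 0.06, i.e. 6 cents
```

If the average stays at or below the baseline over the 24-hour window, the helper returns 0 and the hourly instance price covers everything.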

To launch a T2 Unlimited instance from the EC2 Console, select any T2 instance and then click on Enable next to T2 Unlimited:

And here’s how to switch a running instance from T2 Standard to T2 Unlimited:

Behind the Scenes
As I described in my original post, each T2 instance accumulates CPU Credits as it runs and consumes them while it is running at full-core speed, decelerating to a baseline level when the supply of Credits is exhausted. T2 Unlimited instances have the ability to borrow an entire day’s worth of future credits, allowing them to perform additional bursting. This borrowing is tracked by the new CPUSurplusCreditBalance CloudWatch metric. When this balance rises to the level where it represents an entire day’s worth of future credits, the instance continues to deliver full-core performance, charged at the rate of $0.05 per vCPU per hour for Linux and $0.096 for Windows. These charged surplus credits are tracked by the new CPUSurplusCreditsCharged metric. You will be charged on a per-millisecond basis for partial hours of bursting (further reducing your costs) if you exhaust your surplus late in a given hour.

The charge for any remaining CPUSurplusCreditBalance is processed when the instance is terminated or configured as a T2 Standard. Any accumulated CPUCreditBalance carries over during the transition to T2 Standard.

The T2 Unlimited model is designed to spare you the trouble of watching the CloudWatch metrics, but (if you are like me) you will do it anyway. Let’s take a quick look at a t2.nano and watch the credits over time. First, CPU utilization grows to 100% and the instance begins to consume 5 credits every 5 minutes (one credit is equivalent to a vCPU-minute):

The CPU credit balance remains at 0 because the credits are being produced and consumed at the same rate. The surplus credit balance (tracked by the CPUSurplusCreditBalance metric) ramps up to 72, representing the credits that are being borrowed from the future:

Once the surplus credit balance hits 72, there’s nothing more to borrow from the future, and any further CPU usage is charged at the end of the hour, tracked with the CPUSurplusCreditsCharged metric. The instance consumes 5 credits every 5 minutes and earns 0.25, resulting in a net charge of 4.75 vCPU-minutes for each 5 minutes of bursting:
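That accounting can be modeled in a few lines. The earn rate of 3 credits per hour and the 72-credit surplus cap come from the t2.nano example above; the minute-by-minute granularity is a simplification (the real accounting is finer-grained):

```python
def nano_burst(minutes, earn_per_hour=3.0, cap=72.0):
    """Model a t2.nano at 100% CPU: it burns 1 credit per vCPU-minute
    while earning `earn_per_hour` credits per hour. The shortfall is
    borrowed into the surplus balance until `cap` (a day's worth of
    future credits) is reached; after that, further usage is charged."""
    surplus, charged = 0.0, 0.0
    net = 1.0 - earn_per_hour / 60.0  # credits short each minute (0.95)
    for _ in range(minutes):
        if surplus < cap:
            surplus = min(surplus + net, cap)
        else:
            charged += net
    return surplus, charged

s, c = nano_burst(80)
# After roughly 76 minutes the surplus balance is pinned at 72 and
# remaining minutes accrue charged credits at 0.95 per minute.
```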

You can switch each of your instances back and forth between T2 Standard and T2 Unlimited at any time; all credit balances except CPUSurplusCreditsCharged remain and are carried over. Because T2 Unlimited instances have the ability to burst at any time, they do not receive the 30 minutes of credits given to newly launched T2 Standard instances. Also, since each AWS account can launch a limited number of T2 Standard instances with initial CPU credits each day, T2 Unlimited instances can be a better fit for use in Auto Scaling Groups and other scenarios where large numbers of instances come and go each day.

Available Now
You can launch T2 Unlimited instances in the US East (Northern Virginia), US East (Ohio), US West (Northern California), US West (Oregon), Canada (Central), South America (São Paulo), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Mumbai), Asia Pacific (Seoul), EU (Frankfurt), EU (Ireland), and EU (London) Regions today.



Categories: Cloud

AWS Systems Manager – A Unified Interface for Managing Your Cloud and Hybrid Resources

AWS Blog - Wed, 11/29/2017 - 12:03

AWS Systems Manager is a new way to manage your cloud and hybrid IT environments. AWS Systems Manager provides a unified user interface that simplifies resource and application management, shortens the time to detect and resolve operational problems, and makes it easy to operate and manage your infrastructure securely at scale. This service is absolutely packed full of features. It defines a new experience around grouping, visualizing, and reacting to problems using features from products like Amazon EC2 Systems Manager (SSM) to enable rich operations across your resources.

As I said above, there are a lot of powerful features in this service and we won’t be able to dive deep on all of them but it’s easy to go to the console and get started with any of the tools.

Resource Groupings

Resource Groups allow you to create logical groupings of most resources that support tagging, like: Amazon Elastic Compute Cloud (EC2) instances, Amazon Simple Storage Service (S3) buckets, Elastic Load Balancing load balancers, Amazon Relational Database Service (RDS) instances, Amazon Virtual Private Clouds (VPCs), Amazon Kinesis streams, Amazon Route 53 zones, and more. Previously, you could use the AWS Console to define resource groupings, but AWS Systems Manager provides this new resource group experience via a new console and API. These groupings are a fundamental building block of Systems Manager in that they are frequently the target of various operations you may want to perform, like: compliance management, software inventories, patching, and other automations.

You start by defining a group based on tag filters. From there you can view all of the resources in a centralized console. You would typically use these groupings to differentiate between applications, application layers, and environments like production or dev – but you can make your own rules about how to use them as well. If you imagine a typical 3-tier web app, you might have a few EC2 instances, an ELB, a few S3 buckets, and an RDS instance. You can define a grouping for that application and include all of those different resources simultaneously.
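Under the hood, a tag-based group is defined by a ResourceQuery of type TAG_FILTERS_1_0, whose inner Query is itself a JSON string. Here is a sketch for the boto3 `resource-groups` client, with a hypothetical Stage=prod tag:

```python
import json

def tag_group_query(tag_key, tag_value):
    """Build the ResourceQuery argument for CreateGroup in the AWS
    Resource Groups API. Note the inner Query is a JSON-encoded
    string, a quirk of this API worth double-checking in the docs."""
    return {
        "Type": "TAG_FILTERS_1_0",
        "Query": json.dumps({
            "ResourceTypeFilters": ["AWS::AllSupported"],
            "TagFilters": [{"Key": tag_key, "Values": [tag_value]}],
        }),
    }

query = tag_group_query("Stage", "prod")  # hypothetical tag key/value
# To create the group for real (requires credentials):
# boto3.client("resource-groups").create_group(
#     Name="web-app-prod", ResourceQuery=query)
```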


AWS Systems Manager automatically aggregates and displays operational data for each resource group through a dashboard. You no longer need to navigate through multiple AWS consoles to view all of your operational data. You can easily integrate your existing Amazon CloudWatch dashboards, AWS Config rules, AWS CloudTrail trails, AWS Trusted Advisor notifications, and AWS Personal Health Dashboard performance and availability alerts. You can also easily view your software inventories across your fleet. AWS Systems Manager also provides a compliance dashboard allowing you to see the state of various security controls and patching operations across your fleets.

Acting on Insights

Building on the success of EC2 Systems Manager (SSM), AWS Systems Manager takes all of the features of SSM and provides a central place to access them. These are all the same experiences you would have through SSM, with a more accessible console and centralized interface. You can use the resource groups you’ve defined in Systems Manager to visualize and act on groups of resources.


Automations allow you to define common IT tasks as JSON documents that specify a list of steps. You can also use community-published documents. These documents can be executed through the Console, CLIs, SDKs, scheduled maintenance windows, or triggered based on changes in your infrastructure through CloudWatch Events. You can track and log the execution of each step in the documents and prompt for additional approvals. Automations also allow you to incrementally roll out changes and automatically halt when errors occur. You can start executing an automation directly on a resource group and it will apply itself to the resources that it understands within the group.
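A minimal example of such a document, using the built-in aws:sleep action as a stand-in for real work (swap in actions like aws:runInstances for an actual task); verify the schema details against the Systems Manager documentation:

```python
def minimal_automation_doc():
    """A skeletal Systems Manager Automation document: schemaVersion
    0.3 plus an ordered list of steps under mainSteps. Each step names
    an action and supplies that action's inputs."""
    return {
        "schemaVersion": "0.3",
        "description": "Example automation: pause briefly, then finish.",
        "mainSteps": [
            {
                "name": "pauseBriefly",
                "action": "aws:sleep",
                "inputs": {"Duration": "PT10S"},  # ISO 8601 duration
            },
        ],
    }

doc = minimal_automation_doc()
# Registered with create_document and run with start_automation_execution
# in the boto3 'ssm' client (requires credentials).
```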

Run Command

Run Command is a superior alternative to enabling SSH on your instances. It provides safe, secure remote management of your instances at scale without logging into your servers, replacing the need for SSH bastions or remote PowerShell. It has granular IAM permissions that allow you to restrict which roles or users can run certain commands.
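For example, a SendCommand request with the stock AWS-RunShellScript document can target instances by tag rather than by instance ID; the Environment=Dev tag below is a hypothetical placeholder:

```python
def run_command_request(commands, tag_key="Environment", tag_value="Dev"):
    """Request shape for SSM SendCommand using the AWS-RunShellScript
    document, targeting instances by tag. The tag key and value here
    are placeholders for whatever your fleet actually uses."""
    return {
        "DocumentName": "AWS-RunShellScript",
        "Targets": [{"Key": "tag:" + tag_key, "Values": [tag_value]}],
        "Parameters": {"commands": commands},
    }

req = run_command_request(["uptime", "df -h"])
# boto3.client("ssm").send_command(**req)  # requires credentials
```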

Patch Manager, Maintenance Windows, and State Manager

I’ve written about Patch Manager before and if you manage fleets of Windows and Linux instances it’s a great way to maintain a common baseline of security across your fleet.

Maintenance windows allow you to schedule instance maintenance and other disruptive tasks for a specific time window.

State Manager allows you to control various server configuration details like anti-virus definitions, firewall settings, and more. You can define policies in the console or run existing scripts, PowerShell modules, or even Ansible playbooks directly from S3 or GitHub. You can query State Manager at any time to view the status of your instance configurations.

Things To Know

There’s some interesting terminology here. We haven’t done the best job of naming things in the past so let’s take a moment to clarify. EC2 Systems Manager (sometimes called SSM) is what you used before today. You can still invoke aws ssm commands. However, AWS Systems Manager builds on and enhances many of the tools provided by EC2 Systems Manager and allows those same tools to be applied to more than just EC2. When you see the phrase “Systems Manager” in the future you should think of AWS Systems Manager and not EC2 Systems Manager.

AWS Systems Manager with all of this useful functionality is provided at no additional charge. It is immediately available in all public AWS regions.

The best part about these services is that even with their tight integrations each one is designed to be used in isolation as well. If you only need one component of these services it’s simple to get started with only that component.

There’s a lot more than I could ever document in this post so I encourage you all to jump into the console and documentation to figure out where you can start using AWS Systems Manager.


Categories: Cloud

Announcing Amazon FreeRTOS – Enabling Billions of Devices to Securely Benefit from the Cloud

AWS Blog - Wed, 11/29/2017 - 10:39

I was recently reading an article on ReadWrite.com titled “IoT devices go forth and multiply, to increase 200% by 2021”, and while the article noted the benefit for consumers and the industry of this growth, two things in the article stuck with me. The first was the specific statement that read “researchers warned that the proliferation of IoT technology will create a new bevy of challenges. Particularly troublesome will be IoT deployments at scale for both end-users and providers.” Not only was that sentence a mouthful, but it really addressed some of the challenges that come with building and deploying solutions in this exciting new technology area. The second sentiment in the article that stayed with me was that security issues could grow.

So the article got me thinking: how can we create these cool IoT solutions using low-cost, efficient microcontrollers with a secure operating system that can easily connect to the cloud? Luckily, the answer came to me by way of an exciting new open-source-based offering from AWS that I am happy to announce to you all today. Let’s all welcome Amazon FreeRTOS to the technology stage.

Amazon FreeRTOS is an IoT microcontroller operating system that simplifies development, security, deployment, and maintenance of microcontroller-based edge devices. Amazon FreeRTOS extends the FreeRTOS kernel, a popular real-time operating system, with libraries that enable local and cloud connectivity, security, and (coming soon) over-the-air updates.

So what are some of the great benefits of this new exciting offering, you ask. They are as follows:

  • Easily create solutions for low-power connected devices: provides a common operating system (OS) and libraries that make the development of common IoT capabilities easy for devices, for example, over-the-air (OTA) updates (coming soon) and device configuration.
  • Secure Data and Device Connections: devices run only trusted software via the Code Signing service; Amazon FreeRTOS provides a secure connection to AWS using TLS, as well as the ability to securely store keys and sensitive data on the device.
  • Extensive Ecosystem: contains an extensive hardware and technology ecosystem that allows you to choose from a variety of qualified chipsets, including those from Texas Instruments, Microchip, NXP Semiconductors, and STMicroelectronics.
  • Cloud or Local Connections:  Devices can connect directly to the AWS Cloud or via AWS Greengrass.


What’s cool is that it is easy to get started. 

The Amazon FreeRTOS console allows you to select and download the software that you need for your solution.

There is a Qualification Program that helps to assure you that the microcontroller you choose will run consistently across several hardware options.

Finally, the Amazon FreeRTOS kernel is open source and freely available for download on GitHub.

But I couldn’t leave you without at least showing you a few snapshots of the Amazon FreeRTOS Console.

Within the Amazon FreeRTOS Console, I can select a predefined software configuration that I would like to use.

If I want to have a more customized software configuration, Amazon FreeRTOS allows you to customize a solution that is targeted for your use by adding or removing libraries.


Thanks for checking out the new Amazon FreeRTOS offering. To learn more go to the Amazon FreeRTOS product page or review the information provided about this exciting IoT device targeted operating system in the AWS documentation.

I can’t wait to see what great new IoT systems will be enabled and created with it! Happy coding.



Categories: Cloud

Presenting AWS IoT Analytics: Delivering IoT Analytics at Scale and Faster than Ever Before

AWS Blog - Wed, 11/29/2017 - 10:35

One of the technology areas I thoroughly enjoy is the Internet of Things (IoT). Even as a child I used to infuriate my parents by taking apart the toys they would purchase for me to see how they worked and if I could somehow put them back together. It seems somehow I was destined to end up in the tough and ever-changing world of technology. Therefore, it’s no wonder that I am really enjoying learning and tinkering with IoT devices and technologies. It combines my love of development and software engineering with my curiosity around circuits, controllers, and other facets of the electrical engineering discipline, even though I cannot claim to be an electrical engineer.

Despite all of the information that is collected by the deployment of IoT devices and solutions, I honestly never really thought about the need to analyze, search, and process this data until I came up against a scenario where it became of the utmost importance to be able to search and query through loads of sensor data for an anomaly. Of course, I understood the importance of analytics for businesses to make accurate decisions and predictions to drive the organization’s direction. But it didn’t occur to me initially how important it was to make analytics an integral part of my IoT solutions. Well, I learned my lesson just in time, because this re:Invent a service is launching to make it easier for anyone to process and analyze IoT messages and device data.


Hello, AWS IoT Analytics!  AWS IoT Analytics is a fully managed AWS IoT service that provides advanced analysis of the data collected from your IoT devices.  With the AWS IoT Analytics service, you can process messages, gather and store large amounts of device data, and query your data. The new service also integrates with Amazon QuickSight for visualization of your data and brings the power of machine learning through integration with Jupyter Notebooks.

Benefits of AWS IoT Analytics

  • Helps with predictive analysis of data by providing access to pre-built analytical functions
  • Provides ability to visualize analytical output from service
  • Provides tools to clean up data
  • Can help identify patterns in the gathered data

Be In the Know: IoT Analytics Concepts

  • Channel: archives the raw, unprocessed messages and collects data from MQTT topics.
  • Pipeline: consumes messages from channels and allows message processing.
    • Activities: perform transformations on your messages, including filtering attributes and invoking Lambda functions for advanced processing.
  • Data Store: used as a queryable repository for processed messages. You can have multiple data stores for messages coming from different devices or locations, or filtered by message attributes.
  • Data Set: a view of data retrieved from a data store; it can be generated on a recurring schedule.

Getting Started with AWS IoT Analytics

First, I’ll create a channel to receive incoming messages.  This channel can be used to ingest data sent to the channel via MQTT or messages directed from the Rules Engine. To create a channel, I’ll select the Channels menu option and then click the Create a channel button.

I’ll name my channel TaraIoTAnalyticsID and give the Channel an MQTT topic filter of Temperature. To complete the creation of my channel, I will click the Create Channel button.

Now that I have my Channel created, I need to create a Data Store to receive and store the messages received on the Channel from my IoT device. Remember you can set up multiple Data Stores for more complex solution needs, but I’ll just create one Data Store for my example. I’ll select Data Stores from menu panel and click Create a data store.


I’ll name my Data Store TaraDataStoreID, and once I click the Create the data store button I will have successfully set up a Data Store to house messages coming from my Channel.

Now that I have my Channel and my Data Store, I will need to connect the two using a Pipeline. I’ll create a simple pipeline that just connects my Channel and Data Store, but you can create a more robust pipeline to process and filter messages by adding Pipeline activities like a Lambda activity.

To create a pipeline, I’ll select the Pipelines menu option and then click the Create a pipeline button.

I will not add an Attribute for this pipeline, so I will click the Next button.

As we discussed, there are additional pipeline activities that I can add to my pipeline for the processing and transformation of messages, but I will keep my first pipeline simple and hit the Next button.

The final step in creating my pipeline is for me to select my previously created Data Store and click Create Pipeline.
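The console steps above correspond to a CreatePipeline call in the boto3 `iotanalytics` client. Here is a sketch of a pipeline wiring the channel straight into the data store; the pipeline name is hypothetical, and the camelCase field shapes should be verified against the API reference:

```python
def simple_pipeline(pipeline, channel, datastore):
    """Pipeline activities connecting a channel directly to a data
    store. Each activity names the `next` activity in the chain, and
    the final datastore activity has no `next`."""
    return {
        "pipelineName": pipeline,
        "pipelineActivities": [
            {"channel": {"name": "ingest", "channelName": channel,
                         "next": "store"}},
            {"datastore": {"name": "store", "datastoreName": datastore}},
        ],
    }

p = simple_pipeline("TaraPipeline",  # hypothetical pipeline name
                    "TaraIoTAnalyticsID", "TaraDataStoreID")
# boto3.client("iotanalytics").create_pipeline(**p)  # requires credentials
```

A Lambda or filter activity would slot into the chain between the channel and datastore activities, each pointing `next` at the one after it.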

All that is left for me to take advantage of the AWS IoT Analytics service is to create an IoT rule that sends data to an AWS IoT Analytics channel.  Wow, that was a super easy process to set up analytics for IoT devices.
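That last step, a rule forwarding messages into the channel, can be sketched as a topic-rule payload for the IoT Rules Engine. The rule name and IAM role ARN below are hypothetical, and the iotAnalytics action shape should be checked against the AWS IoT API reference:

```python
def analytics_rule_payload(channel_name, role_arn):
    """Topic-rule payload that forwards every message on the
    Temperature topic into an AWS IoT Analytics channel. The role
    must allow the Rules Engine to write into the channel."""
    return {
        "sql": "SELECT * FROM 'Temperature'",
        "actions": [
            {"iotAnalytics": {"channelName": channel_name,
                              "roleArn": role_arn}},
        ],
    }

payload = analytics_rule_payload(
    "TaraIoTAnalyticsID",
    "arn:aws:iam::123456789012:role/iot-analytics-role")  # hypothetical role
# boto3.client("iot").create_topic_rule(
#     ruleName="TemperatureToAnalytics", topicRulePayload=payload)
```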

If I wanted to create a Data Set as a result of queries run against my data for visualization with Amazon QuickSight, or integrate with Jupyter Notebooks to perform more advanced analytical functions, I can choose the Analyze menu option to bring up the screens to create data sets and access the Jupyter Notebook instances.


As you can see, it was a very simple process to set up advanced data analysis for AWS IoT. With AWS IoT Analytics, you have the ability to collect, visualize, process, query, and store large amounts of data generated by your AWS IoT connected devices. Additionally, you can access the AWS IoT Analytics service in a myriad of different ways: the AWS Command Line Interface (AWS CLI), the AWS IoT API, language-specific AWS SDKs, and AWS IoT Device SDKs.

AWS IoT Analytics is available today for you to dig into the analysis of your IoT data. To learn more about AWS IoT and AWS IoT Analytics, go to the AWS IoT Analytics product page or the AWS IoT documentation.


Categories: Cloud

In the Works – AWS IoT Device Defender- Secure Your IoT Fleet

AWS Blog - Wed, 11/29/2017 - 10:32

Scale takes on a whole new meaning when it comes to IoT. Last year I was lucky enough to tour a gigantic factory that had, on average, one environment sensor per square meter. The sensors measured temperature, humidity, and air purity several times per second, and served as an early warning system for contaminants. I’ve heard customers express interest in deploying IoT-enabled consumer devices in the millions or tens of millions.

With powerful, long-lived devices deployed in a geographically distributed fashion, managing security challenges is crucial. However, the limited amount of local compute power and memory can sometimes limit the ability to use encryption and other forms of data protection.

To address these challenges and to allow our customers to confidently deploy IoT devices at scale, we are working on IoT Device Defender. While the details might change before release, AWS IoT Device Defender is designed to offer these benefits:

Continuous Auditing – AWS IoT Device Defender monitors the policies related to your devices to ensure that the desired security settings are in place. It looks for drifts away from best practices and supports custom audit rules so that you can check for conditions that are specific to your deployment. For example, you could check to see if a compromised device has subscribed to sensor data from another device. You can run audits on a schedule or on an as-needed basis.

Real-Time Detection and Alerting – AWS IoT Device Defender looks for and quickly alerts you to unusual behavior that could be coming from a compromised device. It does this by monitoring the behavior of similar devices over time, looking for unauthorized access attempts, changes in connection patterns, and changes in traffic patterns (either inbound or outbound).

Fast Investigation and Mitigation – In the event that you get an alert that something unusual is happening, AWS IoT Device Defender gives you the tools, including contextual information, to help you to investigate and mitigate the problem. Device information, device statistics, diagnostic logs, and previous alerts are all at your fingertips. You have the option to reboot the device, revoke its permissions, reset it to factory defaults, or push a security fix.

Stay Tuned
I’ll have more info (and a hands-on post) as soon as possible, so stay tuned!


Categories: Cloud

New- AWS IoT Device Management

AWS Blog - Wed, 11/29/2017 - 10:30

AWS IoT and AWS Greengrass give you a solid foundation and programming environment for your IoT devices and applications.

The nature of IoT means that an at-scale device deployment often encompasses millions or even tens of millions of devices deployed at hundreds or thousands of locations. At that scale, treating each device individually is impossible. You need to be able to set up, monitor, update, and eventually retire devices in bulk, collective fashion while also retaining the flexibility to accommodate varying deployment configurations, device models, and so forth.

New AWS IoT Device Management
Today we are launching AWS IoT Device Management to help address this challenge. It will help you through each phase of the device lifecycle, from manufacturing to retirement. Here’s what you get:

Onboarding – Starting with devices in their as-manufactured state, you can control the provisioning workflow. You can use IoT Device Management templates to quickly onboard entire fleets of devices with a few clicks. The templates can include information about device certificates and access policies.

Organization – In order to deal with massive numbers of devices, AWS IoT Device Management extends the existing IoT Device Registry and allows you to create a hierarchical model of your fleet and to set policies on a hierarchical basis. You can drill down through the hierarchy to locate individual devices. You can also query your fleet on attributes such as device type or firmware version.
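As a rough sketch, a fleet query like the one described above might look like this with boto3 and the fleet-indexing SearchIndex API (the attribute names here are hypothetical examples):

```python
def attribute_query(attribute, value):
    """Build a fleet-indexing query string, e.g. 'attributes.deviceType:pressureGauge'."""
    return 'attributes.{0}:{1}'.format(attribute, value)

def find_devices(query_string):
    """Return the names of the things that match the query."""
    import boto3
    resp = boto3.client('iot').search_index(queryString=query_string)
    return [thing['thingName'] for thing in resp.get('things', [])]

# find_devices(attribute_query('deviceType', 'pressureGauge'))
```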

Monitoring – Telemetry from the devices is used to gather real-time connection, authentication, and status metrics, which are published to Amazon CloudWatch. You can examine the metrics and locate outliers for further investigation. IoT Device Management lets you configure the log level for each device group, and you can also publish change events for the Registry and Jobs for monitoring purposes.

Remote Management – AWS IoT Device Management lets you remotely manage your devices. You can push new software and firmware to them, reset to factory defaults, reboot, and set up bulk updates at the desired velocity.

Exploring AWS IoT Device Management
The AWS IoT Device Management Console took me on a tour and pointed out how to access each of the features of the service:

I already have a large set of devices (pressure gauges):

These gauges were created using the new template-driven bulk registration feature. Here’s how I create a template:

The gauges are organized into groups (by US state in this case):

Here are the gauges in Colorado:

AWS IoT group policies allow you to control access to specific IoT resources and actions for all members of a group. The policies are structured very much like IAM policies, and can be created in the console:

Jobs are used to selectively update devices. Here’s how I create one:

As indicated by the Job type above, jobs can run either once or continuously. Here’s how I choose the devices to be updated:
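The same job settings can be expressed through the API. Here is a hedged sketch with boto3; the thing group ARNs and the S3 URL of the job document are placeholders:

```python
def target_selection(continuous):
    """A job runs once against current group members (SNAPSHOT) or keeps
    running as new devices join the group (CONTINUOUS)."""
    return 'CONTINUOUS' if continuous else 'SNAPSHOT'

def create_update_job(job_id, thing_group_arns, document_url, continuous=False):
    """Create a job that pushes the job document to the targeted groups."""
    import boto3
    boto3.client('iot').create_job(
        jobId=job_id,
        targets=thing_group_arns,
        documentSource=document_url,  # S3 URL of the job document
        targetSelection=target_selection(continuous),
    )

# create_update_job('firmware-update-1',
#                   ['arn:aws:iot:us-east-1:123456789012:thinggroup/Colorado'],
#                   'https://s3.amazonaws.com/my-bucket/update-doc.json')
```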

I can create custom authorizers that make use of a Lambda function:

I’ve shown you a medium-sized subset of AWS IoT Device Management in this post. Check it out for yourself to learn more!



Categories: Cloud

Amazon Comprehend – Continuously Trained Natural Language Processing

AWS Blog - Wed, 11/29/2017 - 10:06

Many years ago I was wandering through the University of Maryland CS Library and found a dusty old book titled What Computers Can’t Do, adjacent to its successor, What Computers Still Can’t Do. The second book was thicker, which made me realize that Computer Science was a worthwhile field to study. While preparing to write this post I found an archive copy of the first book and found an interesting observation:

Since a human being using and understanding a sentence in a natural language requires an implicit knowledge of the sentence’s context-dependent use, the only way to make a computer that could understand and translate a natural language may well be, as Turing suspected, to program it to learn about the world.

This was a very prescient observation and I’d like to tell you about Amazon Comprehend, a new service that actually knows (and is very happy to share) quite a bit about the world!

Introducing Amazon Comprehend
Amazon Comprehend analyzes text and tells you what it finds, starting with the language, from Afrikaans to Yoruba, with 98 more in between. It can identify different types of entities (people, places, brands, products, and so forth), extract key phrases, and determine sentiment (positive, negative, mixed, or neutral), all from text in English or Spanish. Finally, Comprehend‘s topic modeling service extracts topics from large sets of documents for analysis or topic-based grouping.

The first four functions (language detection, entity categorization, sentiment analysis, and key phrase extraction) are designed for interactive use, with responses available in hundreds of milliseconds. Topic extraction works on a job-based model, with run times proportional to the size of the collection.

Comprehend is a continuously trained Natural Language Processing (NLP) service. Our team of engineers and data scientists continues to extend and refine the training data, with the goal of making the service increasingly accurate and more broadly applicable over time.

Exploring Amazon Comprehend
You can explore Amazon Comprehend using the Console and then build applications that make use of the Comprehend APIs. I’ll use the opening paragraph from my recent post on Direct Connect to exercise the Amazon Comprehend API Explorer. I simply paste the text into the box and click on Analyze:

Comprehend processes the text at lightning speed, highlights the entities that it identifies (as you can see above), and makes all of the other information available at a click:

Let’s look at each part of the results. Comprehend can detect many categories of entities in the text that I supply:

Here are all of the entities that were found in my text (they can also be displayed in list or raw JSON form):

Here are the first key phrases (the rest are available by clicking Show all):

Language and sentiment are simple and straightforward:

Ok, so those are the interactive functions. Let’s take a look at the batch ones! I already have an S3 bucket that contains several thousand of my older blog posts, an empty one for my output, and an IAM role that allows Comprehend to access both. I enter them and click on Create job to get started:

I can see my recent jobs in the Console:

The output appears in my bucket when the job is complete:

For demo purposes I can download the data and take a peek (in most cases I would feed it in to a visualization or analysis tool):

$ aws s3 ls s3://comp-out/348414629041-284ed5bdd23471b8539ed5db2e6ae1a7-1511638148578/output/
2017-11-25 19:45:09     105308 output.tar.gz
$ aws s3 cp s3://comp-out/348414629041-284ed5bdd23471b8539ed5db2e6ae1a7-1511638148578/output/output.tar.gz .
download: s3://comp-out/348414629041-284ed5bdd23471b8539ed5db2e6ae1a7-1511638148578/output/output.tar.gz to ./output.tar.gz
$ gzip -d output.tar.gz
$ tar xf output.tar
$ ls -l
total 1020
-rw-r--r-- 1 ec2-user ec2-user 495454 Nov 25 19:45 doc-topics.csv
-rw-rw-r-- 1 ec2-user ec2-user 522240 Nov 25 19:45 output.tar
-rw-r--r-- 1 ec2-user ec2-user  20564 Nov 25 19:45 topic-terms.csv
$

The topic-terms.csv file clusters related terms within a common topic number (first column). Here are the first 25 lines:

topic,term,weight
000,aw,0.0926182
000,week,0.0326755
000,announce,0.0268909
000,blog,0.0206818
000,happen,0.0143501
000,land,0.0140561
000,quick,0.0143148
000,stay,0.014145
000,tune,0.0140727
000,monday,0.0125666
001,cloud,0.0521465
001,quot,0.0292118
001,compute,0.0164334
001,aw,0.0245587
001,service,0.018017
001,web,0.0133253
001,video,0.00990734
001,security,0.00810732
001,enterprise,0.00626157
001,event,0.00566274
002,storage,0.0485621
002,datar,0.0279634
002,gateway,0.015391
002,s3,0.0218211

The doc-topics.csv file then indicates which files refer to the topics in the first file. Again, the first 25 lines:

docname,topic,proportion
calillona_brows.html,015,0.577179
calillona_brows.html,062,0.129035
calillona_brows.html,003,0.128233
calillona_brows.html,071,0.125666
calillona_brows.html,076,0.039886
amazon-rds-now-supports-sql-server-2012.html,003,0.851638
amazon-rds-now-supports-sql-server-2012.html,059,0.061293
amazon-rds-now-supports-sql-server-2012.html,032,0.050921
amazon-rds-now-supports-sql-server-2012.html,063,0.036147
amazon-rds-support-for-ssl-connections.html,048,0.373476
amazon-rds-support-for-ssl-connections.html,005,0.197734
amazon-rds-support-for-ssl-connections.html,003,0.148681
amazon-rds-support-for-ssl-connections.html,032,0.113638
amazon-rds-support-for-ssl-connections.html,041,0.100379
amazon-rds-support-for-ssl-connections.html,004,0.066092
zipkeys_simplif.html,037,1.0
cover_art_appli.html,093,1.0
reverse-dns-for-ec2s-elastic-ip-addresses.html,040,0.359862
reverse-dns-for-ec2s-elastic-ip-addresses.html,048,0.254676
reverse-dns-for-ec2s-elastic-ip-addresses.html,042,0.237326
reverse-dns-for-ec2s-elastic-ip-addresses.html,056,0.085849
reverse-dns-for-ec2s-elastic-ip-addresses.html,020,0.062287
coming-soon-oracle-database-11g-on-amazon-rds-1.html,063,0.368438
coming-soon-oracle-database-11g-on-amazon-rds-1.html,041,0.193081
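Before feeding this output to a visualization tool, you might reshape it in a few lines of Python. A small sketch that groups the doc-topics.csv rows by document, strongest topic first:

```python
import csv
from collections import defaultdict

def topics_by_document(lines):
    """Group doc-topics.csv rows (docname,topic,proportion) into a
    {docname: [(topic, proportion), ...]} mapping, strongest topic first."""
    reader = csv.DictReader(lines)
    docs = defaultdict(list)
    for row in reader:
        docs[row['docname']].append((row['topic'], float(row['proportion'])))
    for doc in docs:
        docs[doc].sort(key=lambda pair: pair[1], reverse=True)
    return dict(docs)

sample = [
    'docname,topic,proportion',
    'calillona_brows.html,062,0.129035',
    'calillona_brows.html,015,0.577179',
]
print(topics_by_document(sample)['calillona_brows.html'])
```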

Building Applications with Amazon Comprehend
In most cases you will be using the Amazon Comprehend API to add natural language processing to your own applications. Here are the principal interactive functions:

DetectDominantLanguage – Detect the dominant language of the text. Some of the other functions require you to provide this information, so call this function first.

DetectEntities – Detect entities in the text and return them in JSON form.

DetectKeyPhrases – Detect key phrases in the text and return them in JSON form.

DetectSentiment – Detect the sentiment in the text and return POSITIVE, NEGATIVE, NEUTRAL, or MIXED.

There are also four variants of these functions (each prefixed with Batch) that can process up to 25 documents in parallel. You can use them to build high-throughput data processing pipelines.
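Here’s a sketch of how the interactive calls fit together in Python (boto3), detecting the dominant language first since the later calls need a language code:

```python
def dominant_language(response):
    """Pick the highest-scoring language from a DetectDominantLanguage response."""
    return max(response['Languages'], key=lambda lang: lang['Score'])['LanguageCode']

def analyze_text(text):
    """Run the four interactive Comprehend operations on one piece of text."""
    import boto3
    comprehend = boto3.client('comprehend')
    code = dominant_language(comprehend.detect_dominant_language(Text=text))
    return {
        'language': code,
        'entities': comprehend.detect_entities(Text=text, LanguageCode=code)['Entities'],
        'key_phrases': comprehend.detect_key_phrases(Text=text, LanguageCode=code)['KeyPhrases'],
        'sentiment': comprehend.detect_sentiment(Text=text, LanguageCode=code)['Sentiment'],
    }
```

For higher throughput, the Batch variants (for example `batch_detect_sentiment`) take a `TextList` of up to 25 documents in a single call.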

Here are the functions that you can use to create and manage topic detection jobs:

StartTopicsDetectionJob – Create a job and start it running.

ListTopicsDetectionJobs – Get the list of current and recent jobs.

DescribeTopicsDetectionJob – Get detailed information about a single job.
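A hedged sketch of starting a topic detection job with boto3; the S3 URIs and role ARN are placeholders for your own resources:

```python
def topics_job_config(input_s3_uri, output_s3_uri, role_arn, number_of_topics=100):
    """Build the request for StartTopicsDetectionJob (one document per file)."""
    return dict(
        InputDataConfig={'S3Uri': input_s3_uri, 'InputFormat': 'ONE_DOC_PER_FILE'},
        OutputDataConfig={'S3Uri': output_s3_uri},
        DataAccessRoleArn=role_arn,
        NumberOfTopics=number_of_topics,
    )

def start_topics_job(**kwargs):
    import boto3
    return boto3.client('comprehend').start_topics_detection_job(**kwargs)['JobId']

# start_topics_job(**topics_job_config('s3://my-posts/', 's3://comp-out/',
#                                      'arn:aws:iam::123456789012:role/ComprehendRole'))
```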

Now Available
Amazon Comprehend is available now and you can start building applications with it today!


Categories: Cloud

Introducing Amazon Translate – Real-time Language Translation

AWS Blog - Wed, 11/29/2017 - 10:04

With the advent of the internet, the world has become a much smaller place. Loads of information can be stored and transmitted between cultures and countries in the blink of an eye, giving us all the ability to learn and grow from each other. In order for us to take advantage of all of these powerful vehicles of knowledge and data transfer, we must first break through some of the language barriers that may prevent information sharing and communication.

Outside of being multilingual, one of the ways we can break through these barriers is by leveraging machine translation and related technologies to translate between the languages. Machine translation technologies stem from the computational linguistics field of study that focuses on using software to translate text or speech from one language to another. The concept of machine translation dates back to 1949 when Warren Weaver, an American scientist and mathematician, created the Memorandum on Translation at the request of colleagues from the Division of Natural Sciences at the Rockefeller Foundation to share his language translation ideas. Since then, we have come a long way in the field of machine language translation by using neural networks to enhance the efficiency and quality of translation methods. It should, therefore, be no surprise that the field’s technical progression has led us to the exciting new service I want to introduce to you today.

Let’s Welcome: Amazon Translate

Join me in welcoming the Amazon Translate service to the Amazon Web Services family. Amazon Translate is a high-quality neural machine translation service that uses advanced machine learning technologies to provide fast language translation of text-based content and enable the development of applications that provide multilingual user experiences. The service is currently in preview and can be used to translate text to and from English and the supported languages.

With the Translate service, organizations and businesses now have the ability to expand their products and services into other regions more easily by allowing consumers to access websites, information, and resources in their preferred language using automated translations. In addition, customers can engage in multiplayer chats, gather information from consumer forums, dive into educational documents, and even obtain hotel reviews, even if those resources are provided in a language they can’t readily understand.

If you are like me, you may be curious about how Amazon Translate works to provide quality machine language translation. Based on deep learning technologies, Translate uses neural networks to represent models trained to translate between language pairs. The model consists of an encoder component which reads sentences from the source language and creates a representation that captures the meaning of the text provided. The model also has a decoder component that formulates a semantic representation used to generate a translation of the text from the source language to the target language. In addition, attention mechanisms are used by the service to build context from each word of the source text provided in order to decide which words are appropriate for generating the next target word. The concept of attention mechanisms in deep learning means that the neural network focuses on the relevant context of source inputs by taking into account the entire context of the source sentence, as well as everything it has generated previously. This process helps to create more accurate and fluent translations.

Amazon Translate can be used with other AWS services to build a robust multilingual experience or enable language-independent processing. For example, the Translate service can be used with some of the following services:

  • Amazon Polly: take translated text and provide lifelike speech and allow creation of applications that speak
  • Amazon S3: provides the ability to create translated document repositories
  • Amazon Elasticsearch Service: create multi-language search using the managed Elasticsearch engine
  • Amazon Lex: build a translation chatbot using text and voice
  • AWS Lambda: enable localization of dynamic website content

These are just a few examples, but there are many possible solutions that can be enabled by pairing Translate with other AWS Services. Let’s take a quick look at the console and try out the service preview.

When I log into the console, I am presented with lots of great information. I can review information detailing how the Amazon Translate service works including examples, guidelines, and resources around the service and its API.

Since I am very excited to try out this new service, there is no time like the present. I’ll click the Try Translate button and go into the API Explorer section of the service.

Since I believe I’m already pretty fluent in English, I’ll switch the language pair to have the Source Language as French (fr) and the Target Language as English (en). I’ll take some text from the website of the hotel I stayed at while working in Belgium a couple of weeks ago.

After pasting the French text from the website into the Translate service to translate it to English, I was pleasantly surprised to find that the translation was not only quick but accurate.


I am excited to have had the opportunity to give you an introduction to the new neural machine translation service, Amazon Translate. With the service, you can translate text to and from English across the breadth of supported languages in real time. The service is slated to be used directly via the AWS API, CLI, and/or supported SDKs.
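Once API access is available to you, a call could be as simple as the sketch below (the boto3 client name and the preview’s English-on-one-side constraint are my reading of the announcement):

```python
def valid_preview_pair(source, target):
    """During the preview, translation must be to or from English."""
    return 'en' in (source, target)

def translate(text, source='fr', target='en'):
    """Translate one piece of text and return the translated string."""
    import boto3
    resp = boto3.client('translate').translate_text(
        Text=text, SourceLanguageCode=source, TargetLanguageCode=target)
    return resp['TranslatedText']

# translate("Bienvenue à l'hôtel", source='fr', target='en')
```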

Sign up for the Amazon Translate preview today and try the translation service. Learn more about the service by checking out the preview product page or reviewing the technical guides provided in the AWS documentation.


Categories: Cloud

Amazon Transcribe – Accurate Speech To Text At Scale

AWS Blog - Wed, 11/29/2017 - 10:02

Today we’re launching a private preview of Amazon Transcribe, an automatic speech recognition (ASR) service that makes it easy for developers to add speech to text capabilities to their applications. As bandwidth and connectivity improve, more and more of the world’s data is stored in video and audio formats. People are creating and consuming all of this data faster than ever before. It’s important for businesses to have some means of deriving value from all of that rich multimedia content. With Amazon Transcribe you can save on the costly process of manual transcription with an efficient and scalable API.

You can analyze audio files stored on Amazon Simple Storage Service (S3) in many common formats (WAV, MP3, FLAC, etc.) by starting a job with the API. You’ll receive detailed and accurate transcriptions with timestamps for each word, as well as inferred punctuation. During the preview you can use the asynchronous transcription API to transcribe speech in English or Spanish.

Companies are looking to derive value from both their existing catalogs and their incoming data. By transcribing these stored media, companies can:

  • Analyze customer call data
  • Automate subtitle creation
  • Target advertising based on content
  • Enable rich search capabilities on archives of audio and video content

You can start a transcription job easily with the AWS Command Line Interface (CLI), AWS SDKs, or the Amazon Transcribe console.

Amazon Transcribe currently has three mostly self-explanatory API actions:

  • StartTranscriptionJob
  • GetTranscriptionJob
  • ListTranscriptionJobs

Here’s a quick python script that starts a job and polls until the job is finished:

from __future__ import print_function
import time
import boto3

transcribe = boto3.client('transcribe')
job_name = "RandallTest1"
job_uri = "https://s3-us-west-2.amazonaws.com/randhunt-transcribe-demos/test.flac"
transcribe.start_transcription_job(
    TranscriptionJobName=job_name,
    Media={'MediaFileUri': job_uri},
    MediaFormat='flac',
    LanguageCode='en-US',
    MediaSampleRateHertz=44100
)
while True:
    status = transcribe.get_transcription_job(TranscriptionJobName=job_name)
    if status['TranscriptionJob']['TranscriptionJobStatus'] in ['COMPLETED', 'FAILED']:
        break
    print("Not ready yet...")
    time.sleep(5)
print(status)

The result of a completed job is a presigned Amazon Simple Storage Service (S3) URL that contains our transcription in JSON format:

{
    "jobName": "RandallTest1",
    "results": {
        "transcripts": [{"transcript": "Hello World", "confidence": 1}],
        "items": [
            {
                "start_time": "0.880",
                "end_time": "1.300",
                "alternatives": [{"confidence": 0.91, "word": "Hello"}]
            },
            {
                "start_time": "1.400",
                "end_time": "1.620",
                "alternatives": [{"confidence": 0.84, "word": "World"}]
            }
        ]
    },
    "status": "COMPLETED"
}

As you can see, you get timestamps and confidence scores for each word.
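Pulling those timestamps and confidence scores out of the JSON is straightforward. A small sketch using the result shape shown above:

```python
def words_with_timestamps(result):
    """Flatten a Transcribe result into (word, start, end, confidence) tuples."""
    out = []
    for item in result['results']['items']:
        best = item['alternatives'][0]  # take the top alternative for each word
        out.append((best['word'], float(item['start_time']),
                    float(item['end_time']), best['confidence']))
    return out

sample = {
    "results": {
        "items": [
            {"start_time": "0.880", "end_time": "1.300",
             "alternatives": [{"confidence": 0.91, "word": "Hello"}]},
            {"start_time": "1.400", "end_time": "1.620",
             "alternatives": [{"confidence": 0.84, "word": "World"}]},
        ]
    }
}
print(words_with_timestamps(sample))
```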

Whether alone or combined with other Amazon AI services this is a powerful service and I can’t wait to see what our customers build with it!


You might have noticed that this lends itself well to AWS Step Functions, and I thought the same. Here’s a workflow I might use:

Categories: Cloud

Amazon Kinesis Video Streams – Serverless Video Ingestion and Storage for Vision-Enabled Apps

AWS Blog - Wed, 11/29/2017 - 10:00

Cell phones, security cameras, baby monitors, drones, webcams, dashboard cameras, and even satellites can all generate high-intensity, high-quality video streams. Homes, offices, factories, cities, streets, and highways are now host to massive numbers of cameras. They survey properties after floods and other natural disasters, increase public safety, let you know that your child is safe and sound, capture one-off moments for endless “fail” videos (a personal favorite), collect data that helps to identify and solve traffic problems, and more.

Dealing with this flood of video data can be challenging, to say the least. Incoming streams arrive unannounced, individually or by the millions. The stream contains valuable, real-time data that cannot be deferred, paused, or set aside to be dealt with at a more opportune time. Once you have the raw data, other challenges emerge. Storing, encrypting, and indexing the video data all come to mind. Extracting value—diving deep in to the content, understanding what’s there, and driving action—is the next big step.

New Amazon Kinesis Video Streams
Today I would like to introduce you to Amazon Kinesis Video Streams, the newest member of the Amazon Kinesis family of real-time streaming services. You now have the power to ingest streaming video (or other time-encoded data) from millions of camera devices without having to set up or run your own infrastructure. Kinesis Video Streams accepts your incoming streams, stores them durably and in encrypted form, creates time-based indexes, and enables the creation of vision-enabled applications. You can process the incoming streams using Amazon Rekognition Video, MXNet, TensorFlow, OpenCV, or your own custom code, all in support of the cool new robotics, analytics, and consumer apps that I know you will dream up.

We manage all of the infrastructure for you. First, you use our Producer SDK (device-side) to create an app and then send us video from the device of your choice. The incoming video arrives over a secure TLS connection and is stored in time-indexed form, after being encrypted with an AWS Key Management Service (KMS) key. Next, you use the Video Streams Parser Library (cloud-side) to consume the video stream and to extract value from it.

Regardless of how much video you send (low resolution or high, from one device or from millions), Kinesis Video Streams will scale to meet your needs. You can, as I never get tired of saying, focus on your application and on your business. Amazon Kinesis Video Streams builds on parts of AWS that you already know. It stores video in S3 for cost-effective durability, uses AWS Identity and Access Management (IAM) for access control, and is accessible from the AWS Management Console, AWS Command Line Interface (CLI), and through a set of APIs.

Amazon Kinesis Video Streams Concepts
Let’s run through a couple of concepts and then set up a stream.

Producer – A producer is a data source that puts data into a stream. It could be a baby monitor, a video camera on a drone, or something more exotic: perhaps a temperature sensor or a satellite! The Amazon Kinesis Video Producer SDK provides a set of functions that make it easy to establish a connection and to stream video.

Stream – A stream allows you to transport live video data, optionally store it, and make it available for real-time or batch consumption. Streams can also carry other types of time-encoded data including audio, radar, lidar, and sensor readings. In most cases, there’s a 1-to-1 mapping between producers and streams. Multiple independent applications can consume and process data from a single stream.

Fragment & Frames – A fragment is a time-bound set of individual frames from a stream.

Consumer – A consumer gets data (fragments or frames) from a stream and processes, analyzes, or displays it. Consumers can run in real-time or after the fact, and are built atop the Video Streams Parser Library.

Using Amazon Kinesis Video Streams
As I noted earlier, there’s a 1-to-1 mapping between producers and streams. In most cases, each instance of a producer will create a unique stream using the Kinesis Video Streams API. However, you can create streams manually for test or demo purposes, or if you need a small, fixed number of them.
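Creating a stream from code is a single API call. A minimal sketch (the retention period is an assumption, and the client is passed in to make the helper easy to exercise without AWS credentials):

```python
def create_video_stream(client, name, retention_hours=24):
    """Create a Kinesis video stream and return its ARN."""
    resp = client.create_stream(StreamName=name, DataRetentionInHours=retention_hours)
    return resp['StreamARN']

# import boto3
# create_video_stream(boto3.client('kinesisvideo'), 'my-camera-stream')
```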

To create a stream manually, I open up the Kinesis Video Streams Console and click on Create Kinesis video stream:

I simply enter the name of my stream and click on Create stream:

I can uncheck Use default settings if I want to customize my stream (most of the settings can be changed later):

My stream is ready for use immediately. The console will display video as soon as I start to stream it:

The Kinesis team shared this screen with me; I did not have time to take a field trip. Does that make me a Cheetah?

Developing for Amazon Kinesis Video Streams
The next step is to use the Producer SDK to build the producer app. The app runs on the device or out in the field, and is responsible for creating a stream and then posting fragments (each typically representing 2 to 10 seconds of video) to it by making calls to the PutMedia function.

The consumer side calls the GetMedia and GetMediaFromFragmentList functions to access content from the stream in Matroska (MKV) container format, and uses the included Video Streams Parser Library to extract the desired content. GetMedia is intended for continuous streaming with very low latency; GetMediaFromFragmentList is batch-oriented and allows selective processing.

Now Available
Amazon Kinesis Video Streams is available in the US East (Northern Virginia), US West (Oregon), EU (Ireland), EU (Frankfurt), and Asia Pacific (Tokyo) Regions and you can start building your vision-enabled apps with it today.

Pricing is based on three factors: amount of video produced, amount of video consumed, and amount of video stored.


Categories: Cloud

Welcoming Amazon Rekognition Video: Deep-Learning Based Video Recognition

AWS Blog - Wed, 11/29/2017 - 09:56

It was this time last year during re:Invent 2016 that Jeff announced the Amazon Rekognition service launch. I was so excited to get my hands dirty and start coding against the service to build image recognition solutions. As you may know by now, Amazon Rekognition Image is a cloud service that uses deep learning to provide scalable image recognition and analysis. Amazon Rekognition Image enables you to build and integrate object and scene detection, real-time facial recognition, celebrity recognition, image moderation, and text recognition into your applications and systems.

The Amazon Rekognition Image service was created using deep learning neural network models and is based on the same technology that enables Prime Photos to analyze billions of images each day. At the time of Rekognition’s release, its primary focus was providing scalable, automated analysis, search, and classification of images. Well, that all changes today, as I am excited to tell you about some additional features the service now has to offer.

Hello, Amazon Rekognition Video

Say hello to my new friend, Amazon Rekognition Video. Yes, of course, I was tempted to use the Scarface movie reference and write “Say hello to my little friend.” But since I didn’t say it, you must give me a little credit for not going completely corny. Now that that’s cleared up, let’s get back to discussing this exciting new AI service feature: Amazon Rekognition Video.


Amazon Rekognition Video is a new video analysis service feature that brings scalable computer vision analysis to your S3-stored videos, as well as live video streams. With Rekognition Video, you can accurately detect, track, recognize, extract, and moderate thousands of objects, faces, and pieces of content in a video. What I believe is even cooler about the new feature is that it not only provides accurate information about the objects within a video, but is also the first video analysis service of its kind to use the complete visual, temporal, and motion context of the video to perform activity detection and person tracking, applying its deep-learning-based capabilities to derive more complete insights about the activities being performed in the video. For example, this service feature can identify that there is a man, a car, and a tree in the video, and also deduce that the man was running to the car. Pretty cool, right? Just imagine all of the possible scenarios this functionality opens up for customers.

The process of conducting video analysis using the asynchronous Amazon Rekognition Video API is as follows:

  1. A Rekognition Video Start operation API is called on an .mp4 or .mov video. Please note that videos must be encoded with an H.264 codec. The Start operation APIs are as follows:
    • StartPersonTracking
    • StartFaceDetection
    • StartLabelDetection
    • StartCelebrityRecognition
    • StartContentModeration
  2. Amazon Rekognition Video processes the video and publishes the completion status of the Start operation API request to an Amazon SNS topic.
  3. You retrieve the notification of the API completion result by subscribing an Amazon SQS queue or AWS Lambda function to the SNS topic that you specify.
  4. Call the Get operation API associated with the Start operation API that processed the video, using the JobId provided in the SNS notification. The JobId is also provided as part of the Start API response. The Get operation APIs are:
    • GetPersonTracking
    • GetFaceDetection
    • GetLabelDetection
    • GetCelebrityRecognition
    • GetContentModeration
  5. Retrieve the results of the video analysis via the JSON returned from the Get operation API, along with a pagination token for the next set of results, if applicable.
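The polling-and-pagination pattern in steps 4 and 5 can be sketched in Python. This is a minimal illustration, not the real API: the stub below stands in for an actual boto3 get-face-detection call, and the JobId, page contents, and token values are all made up.

```python
# Sketch of steps 4-5 above: page through a Get operation, following
# NextToken until the last page. A real client would be something like
# boto3.client("rekognition").get_face_detection; here we use a stub.

def collect_all_results(get_page, job_id):
    """Call a Get operation repeatedly, following NextToken pagination."""
    results, token = [], None
    while True:
        page = get_page(JobId=job_id, NextToken=token) if token else get_page(JobId=job_id)
        results.extend(page["Faces"])
        token = page.get("NextToken")
        if not token:  # no token means this was the last page
            return results

# Stub that mimics two pages of get-face-detection output.
_pages = {
    None: {"Faces": [{"Timestamp": 0}], "NextToken": "page-2"},
    "page-2": {"Faces": [{"Timestamp": 500}]},
}

def fake_get_face_detection(JobId, NextToken=None):
    return _pages[NextToken]

faces = collect_all_results(fake_get_face_detection, "job-123")
```

The same loop works for any of the Get operations listed above, since they all share the JobId/NextToken pagination shape.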

You can leverage the video analysis capabilities of Amazon Rekognition Video by using the AWS CLI, AWS SDKs, and/or REST APIs. I believe there is no better way to learn about a new service than diving in and experiencing it for yourself. So let's try it out!

I’ll start by uploading two music videos in .mp4 file format to my S3 bucket of songs in rotation on my playlist; Run by Foo Fighters and Wild Thoughts by DJ Khaled. Hey, what can I say, my musical tastes are broad and diverse.

I’ll create a SNS topic for notifications from Rekognition Video and a SQS queue to receive notifications from the SNS Topic.

Now I can subscribe my SQS queue, RekognitionVideoQueue, to my SNS topic, SNS-RekogntionVideo-Topic.

Now, I’ll use the AWS CLI to call the start-face-detection API operation on my video, DJ_Khaled-Wild_Thoughts.mp4, and obtain my JobId from the API response.

Once I have been notified that a message from the SNS topic has arrived in my RekognitionVideoQueue SQS queue, and the Status in that message is SUCCEEDED, I can call the get-face-detection API operation with the JobId to get the results of the video analysis.

I can also conduct video analysis on my other video, Foo_Fighters-Run.mp4, to obtain information about the objects detected in the frames of the video by calling the start-label-detection and get-label-detection API operations.




Now, with Rekognition Video, video captured from cell phones, cameras, and IoT video sensors, as well as real-time live streams, can be used to create scalable, high-accuracy video analytics solutions. This new deep-learning video feature automates the tasks necessary for detecting objects, faces, and activities in a video, and with the integration of other AWS services, you can build robust media applications for varying workloads.

Learn more about Amazon Rekognition and the new Rekognition Video capability by checking out the Getting Started section on the product page or the Rekognition developer guide in the AWS documentation.


Categories: Cloud

Amazon Neptune – A Fully Managed Graph Database Service

AWS Blog - Wed, 11/29/2017 - 08:57

Of all the data structures and algorithms we use to enable our modern lives, graphs are changing the world every day. Businesses continuously create and ingest rich data with complex relationships. Yet developers are still forced to model these complex relationships in traditional databases. This leads to frustratingly complex queries with high costs and increasingly poor performance as you add relationships. We want to make it easy for you to deal with these modern and increasingly complex datasets, relationships, and patterns.

Hello, Amazon Neptune

Today we’re launching a limited preview of Amazon Neptune, a fast and reliable graph database service that makes it easy to gain insights from relationships among your highly connected datasets. The core of Amazon Neptune is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with milliseconds of latency. Delivered as a fully managed database, Amazon Neptune frees customers to focus on their applications rather than tedious undifferentiated operations like maintenance, patching, backups, and restores. The service supports fast-failover, point-in-time recovery, and Multi-AZ deployments for high availability. With support for up to 15 read replicas you can scale query throughput to 100s of thousands of queries per second. Amazon Neptune runs within your Amazon Virtual Private Cloud and allows you to encrypt your data at rest, giving you complete control over your data integrity in transit and at rest.

There are a lot of interesting features in this service, but graph databases may be an unfamiliar topic for many of you, so let's make sure we're using the same vocabulary.

Graph Databases

A graph database is a store of vertices (nodes) and edges (relationships or connections), both of which can have properties stored as key-value pairs. Graph databases are useful for connected, contextual, relationship-driven data. Some example applications are social media networks, recommendation engines, driving directions, logistics, diagnostics, fraud detection, and genomic sequencing.

Amazon Neptune supports two open standards for describing and querying your graphs:

  • Apache TinkerPop3 style Property Graphs queried with Gremlin. Gremlin is a graph traversal language where a query is a traversal made up of discrete steps following an edge to a node. Your existing tools and clients that are designed to work with TinkerPop allow you to quickly get started with Neptune.
  • Resource Description Framework (RDF) queried with SPARQL. SPARQL is a declarative language based on Semantic Web standards from W3C. It follows a subject->predicate->object model. Specifically, Neptune supports the following standards: RDF 1.1, SPARQL Query 1.1, SPARQL Update 1.1, and the SPARQL Protocol 1.1.

If you have existing applications that work with SPARQL or TinkerPop you should be able to start using Neptune by simply updating the endpoint your applications connect to.

Let’s walk through launching Amazon Neptune.

Launching Amazon Neptune

Start by navigating to the Neptune console then click “Launch Neptune” to start the launch wizard.

On this first screen you simply name your instance and select an instance type. Next we configure the advanced options. Many of these may look familiar to you if you’ve launched an instance-based AWS database service before, like Amazon Relational Database Service (RDS) or Amazon ElastiCache.

Amazon Neptune runs securely in your VPC and can create its own security group that you can add your EC2 instances to for easy access.

Next, we are able to configure some additional options like the parameter group, port, and a cluster name.

On this next screen we can enable KMS based encryption-at-rest, failover priority, and a backup retention time.

Similar to RDS, maintenance of the database can be handled by the service.

Once the instances are done provisioning you can find your connection endpoint on the Details page of the cluster. In my case it’s triton.cae1ofmxxhy7.us-east-1.rds.amazonaws.com.

Using Amazon Neptune

As stated above there are two different query engines that you can use with Amazon Neptune.

To connect to the gremlin endpoint you can use the endpoint with /gremlin to do something like:

curl -X POST -d '{"gremlin":"g.V()"}' https://your-neptune-endpoint:8182/gremlin

You can similarly connect to the SPARQL endpoint with /sparql

curl -G https://your-neptune-endpoint:8182/sparql --data-urlencode 'query=select ?s ?p ?o where {?s ?p ?o}'

Before we can query data we need to populate our database. Let's imagine we're modeling AWS re:Invent and use the bulk loading API to insert some data.
For Property Graphs, Neptune supports CSVs stored in Amazon Simple Storage Service (S3) for loading nodes, node properties, edges, and edge properties.

A typical CSV for vertices looks like this:

~label,name,email,title,~id
Attendee,George Harrison,george@thebeatles.com,Lead Guitarist,1
Attendee,John Lennon,john@thebeatles.com,Guitarist,2
Attendee,Paul McCartney,paul@thebeatles.com,Lead Vocalist,3

The edges CSV looks something like this:

~label,~from,~to,~id
attends,2,ARC307,attends22
attends,3,SRV422,attends27
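If you are generating these load files programmatically, the standard csv module handles the quoting for you. This is a small illustrative sketch; the attendee and session ids mirror the example data above and nothing here is Neptune-specific.

```python
import csv
import io

# Build the vertex and edge CSVs in the Neptune bulk-load shape
# (~label / ~id header columns), using csv.writer for correct quoting.

def to_csv(header, rows):
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(rows)
    return buf.getvalue()

vertices = to_csv(
    ["~label", "name", "email", "title", "~id"],
    [["Attendee", "Paul McCartney", "paul@thebeatles.com", "Lead Vocalist", "3"]],
)

edges = to_csv(
    ["~label", "~from", "~to", "~id"],
    [["attends", "3", "SRV422", "attends27"]],
)
```

The resulting strings can then be written to files and uploaded to the S3 location that the loader request points at.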

Now to load a similarly structured CSV into Neptune we run something like this:

curl -H 'Content-Type: application/json' \
  https://neptune-endpoint:8182/loader -d '
{
  "source": "s3://super-secret-reinvent-data/vertex.csv",
  "format": "csv",
  "region": "us-east-1",
  "accessKey": "AKIATHESEARENOTREAL",
  "secretKey": "ThEseARE+AlsoNotRea1K3YSl0l1234coVFefE12"
}'

Which would return:

{
  "status" : "200 OK",
  "payload" : {
    "loadId" : "2cafaa88-5cce-43c9-89cd-c1e68f4d0f53"
  }
}

I could take that result and query the loading status:

curl https://neptune-endpoint:8182/loader/2cafaa88-5cce-43c9-89cd-c1e68f4d0f53

{
  "status" : "200 OK",
  "payload" : {
    "feedCount" : [ { "LOAD_COMPLETED" : 1 } ],
    "overallStatus" : {
      "fullUri" : "s3://super-secret-reinvent-data/stuff.csv",
      "runNumber" : 1,
      "retryNumber" : 0,
      "status" : "LOAD_COMPLETED",
      "totalTimeSpent" : 1,
      "totalRecords" : 987,
      "totalDuplicates" : 0,
      "parsingErrors" : 0,
      "datatypeMismatchErrors" : 0,
      "insertErrors" : 0
    }
  }
}

For this particular data serialization format I’d repeat this loading process for my edges as well.

For RDF, Neptune supports four serializations: Turtle, N-Triples, N-Quads, and RDF/XML. I could load all of these through the same loading API.

Now that I have my data in my database I can run some queries. In Gremlin, we write our queries as Graph Traversals. I’m a big Paul McCartney fan so I want to find all of the sessions he’s attending:
g.V().has("name","Paul McCartney").out("attends").id()

This defines a graph traversal that finds all of the nodes that have the property “name” with the value “Paul McCartney” (there’s only one!). Next it follows all of the edges from that node that are of the type “attends” and gets the ids of the resulting nodes.

==>ENT332
==>SRV422
==>DVC201
==>GPSBUS216
==>ENT323

Paul looks like a busy guy.
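To make the traversal semantics concrete, here is a toy in-memory version of that same query in Python: filter vertices on a property, follow outgoing edges of a given label, and collect the target ids. The graph data mirrors the example; a real query would of course go to the Gremlin endpoint instead.

```python
# Mimics g.V().has("name","Paul McCartney").out("attends").id()
# on a tiny in-memory property graph.

vertices = {
    "3": {"name": "Paul McCartney"},
    "ENT332": {},
    "SRV422": {},
}
edges = [  # (label, from_id, to_id)
    ("attends", "3", "ENT332"),
    ("attends", "3", "SRV422"),
]

def traverse(has_key, has_value, edge_label):
    start = {vid for vid, props in vertices.items() if props.get(has_key) == has_value}
    return [to for label, frm, to in edges if label == edge_label and frm in start]

sessions = traverse("name", "Paul McCartney", "attends")
```

Each Gremlin step maps onto one stage of the function: has() is the set comprehension, out() is the edge filter, and id() is just returning the target ids.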

Hopefully this gives you a brief overview of the capabilities of graph databases. Graph databases open up a new set of possibilities for a lot of customers and Amazon Neptune makes it easy to store and query your data at scale. I’m excited to see what amazing new products our customers build.


P.S. Major thanks to Brad Bebee and Divij Vaidya for helping to create this post!


Amazon DynamoDB Update – Global Tables and On-Demand Backup

AWS Blog - Wed, 11/29/2017 - 08:52

AWS customers in a wide variety of industries use Amazon DynamoDB to store mission-critical data. Financial services, commerce, AdTech, IoT, and gaming applications (to name a few) make millions of requests per second to individual tables that contain hundreds of terabytes of data and trillions of items, and count on DynamoDB to return results in single-digit milliseconds.

Today we are introducing two powerful new features that I know you will love:

Global Tables – You can now create tables that are automatically replicated across two or more AWS Regions, with full support for multi-master writes, with a couple of clicks. This gives you the ability to build fast, massively scaled applications for a global user base without having to manage the replication process.

On-Demand Backup – You can now create full backups of your DynamoDB tables with a single click, and with zero impact on performance or availability. Your application remains online and runs at full speed. Backups are suitable for long-term retention and archival, and can help you to comply with regulatory requirements.

Global Tables
DynamoDB already replicates your tables across three Availability Zones to provide you with durable, highly available storage. Now you can use Global Tables to replicate your tables across two or more AWS Regions, setting it up with a couple of clicks. You get fast read and write performance that can scale to meet the needs of the most demanding global apps.

You do not need to make any changes to your existing code. You simply send write requests and eventually consistent read requests to a DynamoDB endpoint in any of the designated Regions (writes that are associated with strongly consistent reads should share a common endpoint). Behind the scenes, DynamoDB implements multi-master writes and ensures that the last write to a particular item prevails. When you use Global Tables, each item will include a timestamp attribute representing the time of the most recent write. Updates are propagated to other Regions asynchronously via DynamoDB Streams and are typically complete within one second (you can track this using the new ReplicationLatency and PendingReplicationCount metrics).
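The last-writer-wins rule described above can be sketched in a few lines. This is an illustration of the concept only: the timestamp attribute name and the item contents are made up, not the exact attributes DynamoDB stores.

```python
# Last-writer-wins conflict resolution: when the same item is written
# in two Regions, the version with the later write timestamp prevails.

def resolve(version_a, version_b, ts_attr="last_write_ts"):
    """Return the version of the item whose most recent write wins."""
    return max(version_a, version_b, key=lambda item: item[ts_attr])

ireland   = {"id": "42", "score": 10, "last_write_ts": 1511950000.120}
frankfurt = {"id": "42", "score": 12, "last_write_ts": 1511950000.875}

winner = resolve(ireland, frankfurt)  # Frankfurt wrote last, so it wins
```

DynamoDB performs this resolution for you behind the scenes; the point of the sketch is simply that concurrent writes converge on the latest one rather than erroring.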

Getting started is simple. You create a table in the usual way, and then do one-click adds to arrange for replication to other Regions. You must start out with empty tables, all with the same name and key configuration (hash and optional sort). All of the tables should also share a consistent set of Auto Scaling, TTL, Local Secondary Index, Global Secondary Index, provisioned throughput settings, and IAM policies. For convenience, Auto Scaling is enabled automatically for new Global Tables.

If you are not using DynamoDB Auto Scaling, you should provision enough read capacity to handle local reads, along with enough write capacity to accommodate the writes from all of the tables in the group, plus one additional system write for each application write that originates in the local Region. The system write is used to support the last-writer-wins model.
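As a back-of-the-envelope sketch of that capacity math, under the reading that each replica table needs write capacity for every application write in the group plus one system write per locally originated application write: the Region names and per-Region write rates below are invented for illustration.

```python
# Rough write-capacity estimate per Region for a Global Table group.
# Each replica absorbs all application writes in the group (local +
# replicated in), plus one system write per local application write.

def write_capacity_needed(local_writes_per_sec, group_writes_per_sec):
    return group_writes_per_sec + local_writes_per_sec

regional_app_writes = {"us-east-1": 100, "eu-west-1": 50, "eu-central-1": 50}
group_total = sum(regional_app_writes.values())  # 200 app writes/sec group-wide

needed = {region: write_capacity_needed(w, group_total)
          for region, w in regional_app_writes.items()}
```

Under these made-up rates, the busiest Region would need roughly 300 write capacity units and the quieter ones roughly 250 each.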

Let’s create a Global Table that spans three Regions. I create my table in the usual way and then click on the Global Tables tab:

DynamoDB checks the table to make sure that it meets the requirements, and indicates that I need to enable DynamoDB Streams, which I do. Now I click on Add region, choose EU (Frankfurt), and click on Continue:

The table is created in a matter of seconds:

I do this a second time and I now have a global table that spans three AWS Regions:

I create an item in EU (Ireland):

And it shows up in EU (Frankfurt) right away:

The cross-region replication process adds the aws:rep:updateregion and the aws:rep:updatetime attributes; they are visible to your application but should not be modified.

Global Tables are available in the US East (Northern Virginia), US East (Ohio), EU (Ireland), and EU (Frankfurt) Regions today, with more Regions in the works for 2018. You pay the usual DynamoDB prices for read capacity and storage, along with data transfer charges for cross-region replication. Write capacity is billed in terms of replicated write capacity units.

On-Demand Backup
This feature is designed to help you to comply with regulatory requirements for long-term archival and data retention. You can create a backup with a click (or an API call) without consuming your provisioned throughput capacity or impacting the responsiveness of your application. Backups are stored in a highly durable fashion and can be used to create fresh tables.

The DynamoDB Console now includes a Backups section:

I simply click on Create backup and give my backup a name:

The backup is available right away! It is encrypted with an Amazon-managed key and includes all of the table data, provisioned capacity settings, Local and Global Secondary Index settings, and Streams. It does not include Auto Scaling or TTL settings, tags, IAM policies, CloudWatch metrics, or CloudWatch Alarms.

You may be wondering how this operation can be instant, given that some of our customers have tables approaching half of a petabyte. Behind the scenes, DynamoDB takes full snapshots and saves all change logs. Taking a backup is as simple as saving a timestamp along with the current metadata for the table.

Here is my backup:

And here’s how I restore it to a new table:

Here are a couple of things to keep in mind about DynamoDB backups:

Setup – After you create a new table, DynamoDB has to do some setup work (basically enough time to eat lunch at your desk) before you can create your first backup.

Restoration – Restoration time depends on the size of the table, with times ranging from 30 minutes to several hours for very large tables.

Availability – We are rolling this new feature out on an account-by-account basis as quickly as possible, with initial availability in the US East (Northern Virginia), US East (Ohio), US West (Oregon), and EU (Ireland) Regions.

Pricing – You pay for backup storage by the gigabyte-month and restore based on the amount of data that you restore.




In The Works – Amazon Aurora Serverless

AWS Blog - Wed, 11/29/2017 - 08:47

You may already know about Amazon Aurora. Available in editions that are either MySQL-compatible or PostgreSQL-compatible, Aurora is fully-managed and automatically scales to up to 64 TB of database storage. When you create an Aurora Database Instance, you choose the desired instance size and have the option to increase read throughput using read replicas. If your processing needs or your query rate changes you have the option to modify the instance size or to alter the number of read replicas as needed. This model works really well in an environment where the workload is predictable, with bounds on the request rate and processing requirement.

In some cases the workloads can be intermittent and/or unpredictable, with bursts of requests that might span just a few minutes or hours per day or per week. Flash sales, infrequent or one-time events, online gaming, reporting workloads (hourly or daily), dev/test, and brand-new applications all fit the bill. Arranging to have just the right amount of capacity can be a lot of work; paying for it on a steady-state basis might not be sensible.

Get Ready for Amazon Aurora Serverless
Today we are launching a preview (sign up now) of Amazon Aurora Serverless. Designed for workloads that are highly variable and subject to rapid change, this new configuration allows you to pay for the database resources you use, on a second-by-second basis.

This serverless model builds on the clean separation of processing and storage that’s an intrinsic part of the Aurora architecture (read Design Considerations for High-Throughput Cloud-Native Relational Databases to learn more). Instead of choosing your database instance size up front, you create an endpoint, set the desired minimum and maximum capacity if you like, and issue queries to the endpoint. The endpoint is a simple proxy that routes your queries to a rapidly scaled fleet of database resources. This allows your connections to remain intact even as scaling operations take place behind the scenes. Scaling is rapid, with new resources coming online within 5 seconds. Here’s how it all fits together:

Because storage and processing are separate, you can scale all the way down to zero and pay only for storage. I think this is really cool, and I expect it to lead to the creation of new kinds of instant-on, transient applications. Scaling happens in seconds, building upon a pool of "warm" resources that are raring to go and eager to serve your requests. Special care is taken to build upon existing cached and buffered content so that newly added resources operate at full speed. You will be able to make your existing Aurora databases serverless with almost no effort.

Billing is based on Aurora Capacity Units, each representing a combination of compute power and memory. It is metered in 1-second increments, with a 1-minute minimum for each newly added resource.
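The metering rule can be expressed directly. The per-ACU price below is a placeholder, since the post does not quote one; the rule being illustrated is per-second metering with a one-minute minimum for each newly added resource.

```python
# Aurora Serverless metering sketch: per-second billing with a
# 1-minute minimum each time a new resource comes online.

ASSUMED_PRICE_PER_ACU_SECOND = 0.00006  # hypothetical rate, for illustration

def billed_seconds(actual_seconds):
    """Per-second metering with a 1-minute minimum per newly added resource."""
    return max(60, actual_seconds)

def charge(acus, actual_seconds):
    return acus * billed_seconds(actual_seconds) * ASSUMED_PRICE_PER_ACU_SECOND

short_burst = billed_seconds(10)    # a 10-second burst still bills 60 seconds
steady_hour = billed_seconds(3600)  # a full hour bills exactly 3600 seconds
```

The minimum only matters for very short-lived capacity; anything running longer than a minute is billed for precisely the seconds it ran.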

Stay Tuned
I’ll be able to more information about Amazon Aurora Serverless in early 2018. Our current plan is to make it available in production form with MySQL compatibility in the first half, and to follow up with PostgreSQL compatibility later in the year. Today, you can sign up for the preview.



Amazon Elastic Container Service for Kubernetes

AWS Blog - Wed, 11/29/2017 - 08:43

My colleague Deepak Singh has a lot to say about containers!


We have a lot of AWS customers who run Kubernetes on AWS. In fact, according to the Cloud Native Computing Foundation, 63% of Kubernetes workloads run on AWS. While AWS is a popular place to run Kubernetes, there's still a lot of manual configuration that customers need to do to manage their Kubernetes clusters. You have to install and operate the Kubernetes masters and configure a cluster of Kubernetes workers. In order to achieve high availability in your Kubernetes clusters, you have to run at least three Kubernetes masters across different AZs. Each master needs to be configured to talk to the others, reliably share information, load balance, and fail over to the other masters if one experiences a failure. Then, once you have it all set up and running, you still have to deal with upgrades and patches of the master and worker software. This all requires a good deal of operational expertise and effort, and customers asked us to make this easier.

Introducing Amazon EKS
Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a fully managed service that makes it easy for you to use Kubernetes on AWS without having to be an expert in managing Kubernetes clusters. There are a few things that we think developers will really like about this service. First, Amazon EKS runs the upstream version of the open-source Kubernetes software, so you can use all the existing plugins and tooling from the Kubernetes community. Applications running on Amazon EKS are fully compatible with applications running on any standard Kubernetes environment, whether running in on-premises datacenters or public clouds. This means that you can easily migrate your Kubernetes application to Amazon EKS with zero code changes. Second, Amazon EKS automatically runs K8s with three masters across three AZs to protect against a single point of failure. This multi-AZ architecture delivers resiliency against the loss of an AWS Availability Zone.

Third, Amazon EKS also automatically detects and replaces unhealthy masters, and it provides automated version upgrades and patching for the masters. Last, Amazon EKS is integrated with a number of key AWS features such as Elastic Load Balancing for load distribution, IAM for authentication, Amazon VPC for isolation, AWS PrivateLink for private network access, and AWS CloudTrail for logging.

How it Works
Now, let’s see how some of this works. Amazon EKS integrates IAM authentication with Kubernetes RBAC (the native role based access control system for Kubernetes) through a collaboration with Heptio.

You can assign RBAC roles directly to each IAM entity allowing you to granularly control access permissions to your Kubernetes masters. This allows you to easily manage your Kubernetes clusters using standard Kubernetes tools, such as kubectl.

You can also use PrivateLink if you want to access your Kubernetes masters directly from your own Amazon VPC. With PrivateLink, your Kubernetes masters and the Amazon EKS service endpoint appear as an elastic network interface with private IP addresses in your Amazon VPC.

This allows you to access the Kubernetes masters and the Amazon EKS service directly from within your own Amazon VPC, without using public IP addresses or requiring the traffic to traverse the internet.

Finally, we also built an open source CNI plugin that anyone can use with their Kubernetes clusters on AWS. This allows you to natively use Amazon VPC networking with your Kubernetes pods.

With Amazon EKS, launching a Kubernetes cluster is as easy as a few clicks in the AWS Management Console. Amazon EKS handles the rest, the upgrades, patching, and high availability. Amazon EKS is available in Preview. We look forward to hearing your feedback.

— Deepak Singh, General Manager of AWS Container Services


Introducing AWS Fargate – Run Containers without Managing Infrastructure

AWS Blog - Wed, 11/29/2017 - 08:33

Containers are a powerful way for developers to develop, package, and deploy their applications. At AWS we have over a hundred thousand active ECS clusters and hundreds of millions of new containers started each week. That's 400+% customer growth since 2016. Container orchestration solutions like Amazon ECS and Kubernetes make it easier to deploy, manage, and scale these container workloads, increasing your agility. However, with each of these container management solutions you're still responsible for the availability, capacity, and maintenance of the underlying infrastructure. At AWS we saw this as an opportunity to remove some undifferentiated heavy lifting. We want to let you take full advantage of the speed, agility, and immutability that containers offer so you can focus on building your applications rather than managing your infrastructure.

AWS Fargate

AWS Fargate is an easy way to deploy your containers on AWS. To put it simply, Fargate is like EC2, but instead of giving you a virtual machine, you get a container. It's a technology that allows you to use containers as a fundamental compute primitive without having to manage the underlying instances. All you need to do is build your container image, specify the CPU and memory requirements, define your networking and IAM policies, and launch. With Fargate, you have flexible configuration options to closely match your application needs and you're billed with per-second granularity.

The best part? You can still use all of the same ECS primitives, APIs, and AWS integrations. Fargate provides native integrations with Amazon Virtual Private Cloud, AWS Identity and Access Management (IAM), Amazon CloudWatch and load balancers. Fargate tasks use the AWSVPC networking mode and provision an Elastic Network Interface (ENI) in your VPC to communicate securely with your resources. With the AWS Command Line Interface (CLI) launching a Fargate task is simple.

aws ecs run-task --launch-type FARGATE --cluster BlogCluster \
  --task-definition blog \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-b563fcd3]}"

It’s also easy to use the console to create task definitions and run tasks with the Fargate launch type.

Once we’ve launched a few tasks we can see them running in our cluster:

You’ll notice that ECS clusters are heterogeneous. They can contain tasks running in Fargate and on EC2.

If we dive a little deeper and look at a task we can see some useful information including the ENI that Fargate provisioned in our VPC and all of the containers used by that task. The logs tab gives me easy access to my CloudWatch Logs for that task as well.

Let’s take a look at the configuration options and pricing details for Fargate.


AWS Fargate uses an on-demand pricing model. You pay per second for the amount of vCPU and memory resources consumed by your applications. The price per vCPU is $0.00084333 per minute ($0.0506 per hour) and the price per GB of memory is $0.00021167 per minute ($0.0127 per hour). With Fargate you have 50 configuration options for vCPU and memory to support a wide range of workloads. The configuration options are below.


CPU (vCPU)   Memory Values (GB)
0.25         0.5, 1, 2
0.5          1, 2, 3
1            Min. 2 GB and Max. 8 GB, in 1 GB increments
2            Min. 4 GB and Max. 16 GB, in 1 GB increments
4            Min. 8 GB and Max. 30 GB, in 1 GB increments

Things To Know
  • You can configure Fargate to closely meet your application’s resource requirements and pay only for resources required by your containers. You can launch tens or tens of thousands of containers in seconds.
  • Fargate tasks run similarly to tasks running on EC2. You can add them to VPCs, configure load balancers, and assign IAM roles.
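Using the hourly rates quoted above ($0.0506 per vCPU-hour and $0.0127 per GB-hour), task cost is straightforward arithmetic; the task sizes and durations below are examples chosen from the configuration table, not recommendations.

```python
# Fargate cost arithmetic from the prices quoted in this post,
# billed with per-second granularity.

VCPU_PER_HOUR = 0.0506   # price per vCPU-hour
GB_PER_HOUR = 0.0127     # price per GB of memory per hour

def task_cost(vcpu, memory_gb, seconds):
    hours = seconds / 3600
    return (vcpu * VCPU_PER_HOUR + memory_gb * GB_PER_HOUR) * hours

# A 1 vCPU / 2 GB task running for one hour:
one_hour = round(task_cost(1, 2, 3600), 4)

# The smallest configuration (0.25 vCPU / 0.5 GB) for one hour:
smallest = round(task_cost(0.25, 0.5, 3600), 4)
```

A 1 vCPU / 2 GB task works out to $0.076 per hour under these rates, and the smallest configuration to $0.019 per hour.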
On the Roadmap

I won’t spill all the beans, but we have a really exciting roadmap for AWS Fargate. I will tell you that we plan to support launching containers on Fargate using Amazon EKS in 2018. As always, we love your feedback. Please leave a note in the Amazon ECS forum letting us know what you think.

Fargate is available today in the US East (Northern Virginia) region.



H1 Instances – Fast, Dense Storage for Big Data Applications

AWS Blog - Tue, 11/28/2017 - 21:51

The scale of AWS and the diversity of our customer base gives us the opportunity to create EC2 instance types that are purpose-built for many different types of workloads. For example, a number of popular big data use cases depend on high-speed, sequential access to multiple terabytes of data. Our customers want to build and run very large MapReduce clusters, host distributed file systems, use Apache Kafka to process voluminous log files, and so forth.

New H1 Instances
The new H1 instances are designed specifically for this use case. In comparison to the existing D2 (dense storage) instances, the H1 instances provide more vCPUs and more memory per terabyte of local magnetic storage, along with increased network bandwidth, giving you the power to address more complex challenges with a nicely balanced mix of resources.

The instances are based on Intel Xeon E5-2686 v4 processors running at a base clock frequency of 2.3 GHz and come in four instance sizes (all VPC-only and HVM-only):

Instance Name   vCPUs   Memory    Local Storage   Network Bandwidth
h1.2xlarge      8       32 GiB    2 TB            Up to 10 Gbps
h1.4xlarge      16      64 GiB    4 TB            Up to 10 Gbps
h1.8xlarge      32      128 GiB   8 TB            10 Gbps
h1.16xlarge     64      256 GiB   16 TB           25 Gbps

The two largest sizes support Intel Turbo and CPU power management, with all-core Turbo at 2.7 GHz and single-core Turbo at 3.0 GHz.

Local storage is optimized to deliver high throughput for sequential I/O; you can expect to transfer up to 1.15 gigabytes per second if you use a 2 megabyte block size. The storage is encrypted at rest using 256-bit XTS-AES and one-time keys.
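To put that throughput figure in perspective, here is a rough calculation of how long a full sequential read of each instance size's local storage would take, assuming the drives sustain the quoted 1.15 GB/s end to end (a best-case assumption, not a benchmark).

```python
# Rough scan-time arithmetic from the 1.15 GB/s sequential figure.

THROUGHPUT_GB_PER_SEC = 1.15

def full_scan_hours(storage_tb):
    seconds = storage_tb * 1000 / THROUGHPUT_GB_PER_SEC  # treating 1 TB as 1000 GB
    return seconds / 3600

h1_2xlarge_hours = round(full_scan_hours(2), 1)    # 2 TB of local storage
h1_16xlarge_hours = round(full_scan_hours(16), 1)  # 16 TB of local storage
```

So even a full pass over the 16 TB on an h1.16xlarge is on the order of four hours of sequential reading under these assumptions.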

Moving large amounts of data on and off of these instances is facilitated by the use of Enhanced Networking, giving you up to 25 Gbps of network bandwidth within Placement Groups.

Launch One Today
H1 instances are available today in the US East (Northern Virginia), US West (Oregon), US East (Ohio), and EU (Ireland) Regions. You can launch them in On-Demand or Spot form. Dedicated Hosts, Dedicated Instances, and Reserved Instances (both 1-year and 3-year) are also available.



