Feed aggregator

Amazon Comprehend Medical – Natural Language Processing for Healthcare Customers

AWS Blog - Tue, 11/27/2018 - 13:26

As the son of a Gastroenterologist and a Dermatologist, I grew up listening to arcane conversations involving a never-ending stream of complex medical terms: human anatomy, surgical procedures, medication names… and their abbreviations. A fascinating experience for a curious child wondering whether his parents were wizards of some sort and what all this gibberish meant.

For this reason, I am very happy to announce Amazon Comprehend Medical, an extension of Amazon Comprehend for healthcare customers.

A quick reminder on Amazon Comprehend

Amazon Comprehend was launched last year at AWS re:Invent. In a nutshell, this Natural Language Processing service provides simple real-time APIs for language detection, entity categorization, sentiment analysis, and key phrase extraction. It also lets you organize text documents automatically using an unsupervised learning technique called “topic modeling”.
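If you haven’t tried it before, calling those APIs takes just a few lines of boto3. Here is a quick sketch of my own (the sample sentence is made up) showing sentiment analysis and entity detection on general-purpose text:

import boto3

comprehend = boto3.client('comprehend')

# Analyze a sample English sentence with the general-purpose APIs.
text = "AWS re:Invent 2018 made this a great week for builders in Las Vegas."
print(comprehend.detect_sentiment(Text=text, LanguageCode='en')['Sentiment'])
print(comprehend.detect_entities(Text=text, LanguageCode='en')['Entities'])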

Used by FINRA, LexisNexis, or Isentia, Amazon Comprehend can understand general-purpose text. However, given the very specific nature of clinical documents, healthcare customers have asked us to build them a version of Amazon Comprehend tailored to their unique needs.

Introducing Amazon Comprehend Medical

Amazon Comprehend Medical builds on top of Amazon Comprehend and adds the following features:

  • Support for entity extraction and entity traits on a vast vocabulary of medical terms: anatomy, conditions, procedures, medications, abbreviations, etc.
  • An entity extraction API (detect_entities) trained on these categories and their subtypes.
  • A Protected Health Information extraction API (detect_phi) able to locate contact details, medical record numbers, etc.

A word of caution: Amazon Comprehend Medical may not accurately identify protected health information in all circumstances, and does not meet the requirements for de-identification of protected health information under HIPAA. You are responsible for reviewing any output provided by Amazon Comprehend Medical to ensure it meets your needs.

Now let me show you how to get started with this new service. First, I’ll use the AWS Console and then I’ll run a simple Python example.

Using Amazon Comprehend Medical in the AWS Console

Opening the AWS Console, all we have to do is paste some text and click on the ‘Analyze’ button.

The document is processed immediately. Entities are extracted and highlighted: we see personal information in orange, medication in red, anatomy in purple and medical conditions in green.

Personally Identifiable Information is correctly picked up. This is particularly important for researchers who need to anonymize documents before exchanging or publishing them. Also, ‘rash’ and ‘sleeping trouble’ are correctly detected as medical conditions diagnosed by the doctor (‘Dx’ is shorthand for ‘diagnosis’). Medications are detected as well.

However, Amazon Comprehend Medical goes beyond the simple extraction of medical terms. It’s also able to understand complex relationships, such as the dosage for a medication or detailed diagnosis information. Here’s a nice example.

As you can see, Amazon Comprehend Medical is able to figure out abbreviations such as ‘po’ and ‘qhs’: the first one means that the medication should be taken orally and the second is an abbreviation for ‘quaque hora somni’ (yes, it’s Latin), i.e. at bedtime.

Let’s now dive a little deeper and run a Python example.

Using Amazon Comprehend Medical with the AWS SDK for Python

First, let’s import the boto3 SDK and create a client for the service.

import boto3

comprehend = boto3.client(service_name='comprehendmedical')

Now let’s call the detect_entity API on a text sample and print the detected entities.

text = "Pt is 40yo mother, software engineer HPI : Sleeping trouble on present dosage of Clonidine. Severe Rash on face and leg, slightly itchy Meds : Vyvanse 50 mgs po at breakfast daily, Clonidine 0.2 mgs -- 1 and 1 / 2 tabs po qhs HEENT : Boggy inferior turbinates, No oropharyngeal lesion Lungs : clear Heart : Regular rhythm Skin : Papular mild erythematous eruption to hairline Follow-up as scheduled"

result = comprehend.detect_entities(Text=text)
entities = result['Entities']
for entity in entities:
    print(entity)

Take a look at this medication entity: it has three nested attributes (dosage, route and frequency) which add critically important context.

{u'Id': 3,
 u'Score': 0.9976208806037903,
 u'BeginOffset': 145,
 u'EndOffset': 152,
 u'Category': u'MEDICATION',
 u'Type': u'BRAND_NAME',
 u'Text': u'Vyvanse',
 u'Traits': [],
 u'Attributes': [
   {u'Id': 4, u'Score': 0.9681360125541687, u'BeginOffset': 153, u'EndOffset': 159,
    u'Type': u'DOSAGE', u'Text': u'50 mgs', u'Traits': []},
   {u'Id': 5, u'Score': 0.99924635887146, u'BeginOffset': 160, u'EndOffset': 162,
    u'Type': u'ROUTE_OR_MODE', u'Text': u'po', u'Traits': []},
   {u'Id': 6, u'Score': 0.9738683700561523, u'BeginOffset': 163, u'EndOffset': 181,
    u'Type': u'FREQUENCY', u'Text': u'at breakfast daily', u'Traits': []}]}

Here is another example. This medical condition entity carries a ‘negation’ trait, meaning that the condition was found to be absent, i.e. this patient doesn’t have an oropharyngeal lesion.

{u'Category': u'MEDICAL_CONDITION',
 u'Id': 16,
 u'Score': 0.9825472235679626,
 u'BeginOffset': 266,
 u'EndOffset': 286,
 u'Type': u'DX_NAME',
 u'Text': u'oropharyngeal lesion',
 u'Traits': [
   {u'Score': 0.9701067209243774, u'Name': u'NEGATION'},
   {u'Score': 0.9053299427032471, u'Name': u'SIGN'}]}

The last feature I’d like to show you is extracting personal information with the detect_phi API.

result = comprehend.detect_phi(Text=text)
entities = result['Entities']
for entity in entities:
    print(entity)

A couple of pieces of personal information are present in this text and we correctly extract them.

{u'Category': u'PERSONAL_IDENTIFIABLE_INFORMATION', u'BeginOffset': 6, u'EndOffset': 10,
 u'Text': u'40yo', u'Traits': [], u'Score': 0.997914731502533, u'Type': u'AGE', u'Id': 0}

{u'Category': u'PERSONAL_IDENTIFIABLE_INFORMATION', u'BeginOffset': 19, u'EndOffset': 36,
 u'Text': u'software engineer', u'Traits': [], u'Score': 0.8865673542022705, u'Type': u'PROFESSION', u'Id': 1}
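Since detect_phi returns character offsets for every match, anonymizing a document is mostly a matter of string slicing. Here is a minimal sketch of my own (not part of the service) that replaces each detected PHI span with its entity type:

def redact_phi(comprehend_client, document):
    # Replace every detected PHI span with its entity type, e.g. [AGE].
    # Work from the end of the string so earlier offsets remain valid.
    entities = comprehend_client.detect_phi(Text=document)['Entities']
    for entity in sorted(entities, key=lambda e: e['BeginOffset'], reverse=True):
        document = (document[:entity['BeginOffset']] +
                    '[' + entity['Type'] + ']' +
                    document[entity['EndOffset']:])
    return document

print(redact_phi(comprehend, text))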

As you can see, Amazon Comprehend Medical can help you extract complex information and relationships, while being extremely simple to use.

Once again, please keep in mind that Amazon Comprehend Medical is not a substitute for professional medical advice, diagnosis, or treatment. You definitely want to closely review any information it provides and use your own experience and judgement before taking any decision.

Now Available
I hope this post was informative. You can start building applications with Amazon Comprehend Medical today in the following regions: US East (Northern Virginia), US East (Ohio), US West (Oregon) and Europe (Ireland).

In addition, the service is part of the AWS free tier: for three months after signup, the first 25,000 units of text (or 2.5 million characters) are free of charge.

Why don’t you try it on your latest prescription or medical exam and let us know what you think?

Julien;


Categories: Cloud

NEW – AWS Marketplace makes it easier to govern software procurement with Private Marketplace

AWS Blog - Tue, 11/27/2018 - 12:05

Over six years ago, we launched AWS Marketplace with the ambitious goal of providing users of the cloud with the software applications and infrastructure they needed to run their business. Today, more than 200,000 active AWS customers are using software from AWS Marketplace from categories such as security, data and analytics, log analysis and machine learning. Those customers use over 650 million hours a month of Amazon EC2 for products in AWS Marketplace and have more than 950,000 active software subscriptions. AWS Marketplace offers 35 categories and more than 4,500 software listings from more than 1,400 Independent Software Vendors (ISVs) to help you on your cloud journey, no matter what stage of adoption you have reached.

Customers have told us that they love the flexibility and myriad of options that AWS Marketplace provides. Today, I am excited to announce we are offering even more flexibility for AWS Marketplace with the launch of Private Marketplace from AWS Marketplace.

Private Marketplace is a new feature that enables you to create a custom digital catalog of pre-approved products from AWS Marketplace. As an administrator, you can select products that meet your procurement policies and make them available for your users. You can also further customize Private Marketplace with company branding, such as logo, messaging, and color scheme. All controls for Private Marketplace apply across your entire AWS Organizations entity, and you can define fine-grained controls using Identity and Access Management for roles such as: administrator, subscription manager and end user.

Once you enable Private Marketplace, users within your AWS Organizations entity are redirected to Private Marketplace when they sign in to AWS Marketplace. Now, your users can quickly find, buy, and deploy products knowing they are pre-approved.

 

Private Marketplace in Action

To get started, we need to be using a master account; if you have a single account, it will automatically be classified as a master account. If you are a member of an AWS Organizations managed account, the master account will need to enable Private Marketplace access. Once done, you can add subscription managers and administrators through AWS Identity and Access Management (IAM) policies.

 

1- My account meets the requirement of being a master account, so I can proceed to create a Private Marketplace. I click “Create Private Marketplace” and am redirected to the admin page where I can whitelist products from AWS Marketplace. To grant other users access to approve products for listing, I can use AWS Organizations policies to grant the AWSMarketplaceManageSubscriptions role.

2- I select some popular software and operating systems from the list and add them to Private Marketplace. Once selected, we can see our whitelisted products.

3- One thing that I appreciate, and I am sure the administrators of an organization’s Private Marketplace will too, is the ability to customize the marketplace to bring the style and branding in line with the company. In this case, we can choose the name, logo, color, and description of our Private Marketplace.

4- After a couple of minutes we have our freshly minted Private Marketplace ready to go. There is an explicit step that we need to complete to push our Private Marketplace live, which allows us to create and edit without enabling access to users.

 

5- For the next part, we will switch to a member account and see what our Private Marketplace looks like.

6- We can see the five pieces of software I whitelisted and our customizations to our Private Marketplace. We can also see that these products are “Approved for Procurement” and can be subscribed to by our end users. Other products are still discoverable by our users, but cannot be subscribed to until an administrator whitelists the product.

 

Conclusion

Users in a Private Marketplace can launch products knowing that all products in their Private Marketplace comply with their company’s procurement policies. When users search for products in Private Marketplace, they can see which products are labeled as “Approved for Procurement” and quickly filter between their company’s catalog and the full catalog of software products in AWS Marketplace.

 

Pricing and Availability

Subscription costs for products consumed through Private Marketplace remain the same as in AWS Marketplace. Private Marketplace from AWS Marketplace is available in all commercial regions today.


Categories: Cloud

AWS Ground Station – Ingest and Process Data from Orbiting Satellites

AWS Blog - Tue, 11/27/2018 - 11:54

Did you know that there are currently thousands of satellites orbiting the Earth? I certainly did not, and would have guessed a few hundred at most. Today, high school and college students design, fabricate, and launch nano-, pico-, and even femto-satellites such as CubeSats, PocketQubes, and SunCubes. On the commercial side, organizations of any size can now launch satellites for Earth observation, communication, media distribution, and so forth.

All of these satellites collect a lot of data, and that’s where things get even more interesting. While it is now relatively cheap to get a satellite into Low Earth Orbit (LEO) or Medium Earth Orbit (MEO) and only slightly more expensive to achieve a more distant Geostationary Orbit, getting that data back to Earth is still more difficult than it should be. Large-scale satellite operators often build and run their own ground stations at a cost of up to one million dollars or more each; smaller operators enter into inflexible long-term contracts to make use of existing ground stations.

Some of the challenges that I reviewed above may remind you of those early, pre-cloud days when you had to build and run your own data center. That changed when we launched Amazon EC2 back in 2006.

Introducing AWS Ground Station
Today I would like to tell you about AWS Ground Station. Amazon EC2 made compute power accessible on a cost-effective, pay-as-you-go basis. AWS Ground Station does the same for satellite ground stations. Instead of building your own ground station or entering into a long-term contract, you can make use of AWS Ground Station on an as-needed, pay-as-you-go basis. You can get access to a ground station on short notice in order to handle a special event: severe weather, a natural disaster, or something more positive such as a sporting event. If you need access to a ground station on a regular basis to capture Earth observations or distribute content world-wide, you can reserve capacity ahead of time and pay even less. AWS Ground Station is a fully managed service. You don’t need to build or maintain antennas, and can focus on your work or research.

We’re starting out with a pair of ground stations today, and will have 12 in operation by mid-2019. Each ground station is associated with a particular AWS Region; the raw analog data from the satellite is processed by our modem digitizer into a data stream (in what is formally known as VITA 49 baseband or VITA 49 RF over IP data streams) and routed to an EC2 instance that is responsible for doing the signal processing to turn it into a byte stream.

Once the data is in digital form, you have a host of streaming, processing, analytics, and storage options. Here’s a starter list:

Streaming – Amazon Kinesis Data Streams to capture, process, and store data streams.

Processing – Amazon Rekognition for image analysis; Amazon SageMaker to build, train, and deploy ML models.

Analytics / Reporting – Amazon Redshift to store processed data in structured data warehouse form; Amazon Athena and Amazon QuickSight for queries.

Storage – Amazon Simple Storage Service (S3) to store data in object form, with Amazon Glacier for long-term archival storage.

Your entire workflow, from the ground stations all the way through to processing, storage, reporting, and delivery, can now be done on elastic, pay-as-you-go infrastructure!
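For instance, once your signal-processing instance has decoded the downlink into a byte stream, pushing the frames into Kinesis Data Streams takes only a few lines of boto3. This is an illustrative sketch, not part of the service: the stream name, partition key, and the frames iterable are my own assumptions.

import boto3

kinesis = boto3.client('kinesis')

def publish_frames(frames, stream_name='satellite-downlink'):
    # 'frames' is assumed to be an iterable of byte strings produced by
    # your signal-processing step; each one becomes a Kinesis record.
    for frame in frames:
        kinesis.put_record(
            StreamName=stream_name,
            Data=frame,
            PartitionKey='my-satellite'   # placeholder partition key
        )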

AWS Ground Station in Action
I did not have an actual satellite to test with, so the AWS Ground Station team created an imaginary one in my account! When you are ready to make use of AWS Ground Station, we’ll need your satellite’s NORAD ID, information about your FCC license, and your AWS account number so that we can associate it with your account.

I open the Ground Station Console and click Reserve contacts now to get started:

The first step is to reserve a contact (an upcoming time when my satellite will be in the optimal position to transmit to the ground station I choose). I choose a ground station from the menu:

I can filter based on status (Available, Scheduled, or Completed) and on a time range:

I can see the contacts, pick one that meets my requirements, select it, and click Reserve Contact:

I confirm my contact on the next page, and click Reserve:

Then I can filter the Contacts list to show all of my upcoming reservations:

After my contact has been reserved, I make sure that my EC2 instances will be running in the AWS Region associated with the ground station at least 15 minutes ahead of the start time. The instance responsible for the signal processing connects to an Elastic Network Interface (ENI), uses DataDefender to manage the data transfer, and routes the data to a software modem such as qRadio to convert it to digital form (we’ll provide customers with a CloudFormation template that will create the ENI and do all of the other setup work).

Things to Know
Here are a couple of things you should know about AWS Ground Station:

Access – Due to the nature of this service, access is not self-serve. You will need to communicate with our team in order to register your satellite(s).

Ground Stations – As I mentioned earlier, we are launching today with 2 ground stations, and will have a total of 12 in operation by 2019. We will monitor utilization and demand, and will build additional stations and antennas as needed.

Pricing – Pricing is per-minute of downlink time, with an option to pre-pay for blocks of minutes.

Jeff;

 

Categories: Cloud

AWS Launches, Previews, and Pre-Announcements at re:Invent 2018 – Monday Night Live

AWS Blog - Mon, 11/26/2018 - 21:03

As promised in Welcome to AWS re:Invent 2018, here’s a summary of the launches, previews, and pre-announcements from Monday Night Live!

Launches
Here are detailed blog posts & what’s new pages for tonight’s launches:

P3dn Instances
The upcoming p3dn.24xlarge instances will feature 100 Gbps network bandwidth, local NVMe storage, and eight of the latest NVIDIA Tesla V100 GPUs (a total of 256 GB of GPU memory). With 2x the GPU memory and 1.5x as many vCPUs as p3.16xlarge instances, these instances will allow you to explore bigger and more complex deep learning algorithms, render 3D images, transcode video in real time, model financial risks, and much more.

Elastic Fabric Adapter
This is a new network interface for EC2 instances. It is designed to support High Performance Computing (HPC) AWS workloads that need lots of inter-node communication: computational fluid dynamics, weather modeling, reservoir simulation, and the like. EFA will support the industry-standard Message Passing Interface (MPI) so that you can bring your existing HPC applications to AWS without changing any code. Sign up for the preview.

AWS IoT Events
This service monitors IoT sensors at scale, looking for anomalies, trends, and patterns that can indicate a systemic failure, production slowdown, or a change in operation. It triggers pre-defined actions and generates alerts to on-site teams when something is amiss. Sign up for the preview.

AWS IoT SiteWise
This service helps our industrial customers to collect, structure, and search thousands of sensor data streams across multiple facilities. An on-premises gateway device collects data from OPC-UA servers and forwards it to AWS for further processing. The data can be used to build visual representations of production lines and processes, and is used in conjunction with AWS IoT Analytics to forecast trends. Sign up for the preview.

AWS IoT Things Graph
This service will make it even easier for you to rapidly build IoT applications for edge gateways that run AWS IoT Greengrass. You will be able to connect devices and web services, even if the devices are from a variety of vendors and speak different protocols. Sign up for the preview.

Stay Tuned
I am looking forward to writing about each of these services when they are ready to launch, so stay tuned!

Jeff;

Categories: Cloud

Firecracker – Lightweight Virtualization for Serverless Computing

AWS Blog - Mon, 11/26/2018 - 21:00

One of my favorite Amazon Leadership Principles is Customer Obsession. When we launched AWS Lambda, we focused on giving developers a secure serverless experience so that they could avoid managing infrastructure. In order to attain the desired level of isolation we used dedicated EC2 instances for each customer. This approach allowed us to meet our security goals but forced us to make some tradeoffs with respect to the way that we managed Lambda behind the scenes. Also, as is the case with any new AWS service, we did not know how customers would put Lambda to use or even what they would think of the entire serverless model. Our plan was to focus on delivering a great customer experience while making the backend ever-more efficient over time.

Just four years later (Lambda was launched at re:Invent 2014) it is clear that the serverless model is here to stay. Today, Lambda processes trillions of executions for hundreds of thousands of active customers every month. Last year we extended the benefits of serverless to containers with the launch of AWS Fargate, which now runs tens of millions of containers for AWS customers every week.

As our customers increasingly adopted serverless, it was time to revisit the efficiency issue. Taking our Invent and Simplify principle to heart, we asked ourselves what a virtual machine would look like if it was designed for today’s world of containers and functions!

Introducing Firecracker
Today I would like to tell you about Firecracker, a new virtualization technology that makes use of KVM. You can launch lightweight micro-virtual machines (microVMs) in non-virtualized environments in a fraction of a second, taking advantage of the security and workload isolation provided by traditional VMs and the resource efficiency that comes along with containers.

Here’s what you need to know about Firecracker:

Secure – This is always our top priority! Firecracker uses multiple levels of isolation and protection, and exposes a minimal attack surface.

High Performance – You can launch a microVM in as little as 125 ms today (and even faster in 2019), making it ideal for many types of workloads, including those that are transient or short-lived.

Battle-Tested – Firecracker has been battle-tested and is already powering multiple high-volume AWS services including AWS Lambda and AWS Fargate.

Low Overhead – Firecracker consumes about 5 MiB of memory per microVM. You can run thousands of secure VMs with widely varying vCPU and memory configurations on the same instance.

Open Source – Firecracker is an active open source project. We are ready to review and accept pull requests, and look forward to collaborating with contributors from all over the world.

Firecracker was built in a minimalist fashion. We started with crosvm and set up a minimal device model in order to reduce overhead and to enable secure multi-tenancy. Firecracker is written in Rust, a modern programming language that guarantees thread safety and prevents many types of buffer overrun errors that can lead to security vulnerabilities.

Firecracker Security
As I mentioned earlier, Firecracker incorporates a host of security features! Here’s a partial list:

Simple Guest Model – Firecracker guests are presented with a very simple virtualized device model in order to minimize the attack surface: a network device, a block I/O device, a Programmable Interval Timer, the KVM clock, a serial console, and a partial keyboard (just enough to allow the VM to be reset).

Process Jail – The Firecracker process is jailed using cgroups and seccomp BPF, and has access to a small, tightly controlled list of system calls.

Static Linking – The Firecracker process is statically linked, and can be launched from a jailer to ensure that the host environment is as safe and clean as possible.

Firecracker in Action
To get some experience with Firecracker, I launch an i3.metal instance and download three files (the firecracker binary, a root file system image, and a Linux kernel):

I need to set up the proper permission to access /dev/kvm:

$ sudo setfacl -m u:${USER}:rw /dev/kvm

I start firecracker in one PuTTY session, and then issue commands in another (the process listens on a Unix-domain socket and implements a REST API). The first command sets the configuration for my first guest machine:

$ curl --unix-socket /tmp/firecracker.sock -i \
    -X PUT "http://localhost/machine-config" \
    -H "accept: application/json" \
    -H "Content-Type: application/json" \
    -d "{ \"vcpu_count\": 1, \"mem_size_mib\": 512 }"

And, the second sets the guest kernel:

$ curl --unix-socket /tmp/firecracker.sock -i \
    -X PUT "http://localhost/boot-source" \
    -H "accept: application/json" \
    -H "Content-Type: application/json" \
    -d "{ \"kernel_image_path\": \"./hello-vmlinux.bin\", \"boot_args\": \"console=ttyS0 reboot=k panic=1 pci=off\" }"

And, the third one sets the root file system:

$ curl --unix-socket /tmp/firecracker.sock -i \
    -X PUT "http://localhost/drives/rootfs" \
    -H "accept: application/json" \
    -H "Content-Type: application/json" \
    -d "{ \"drive_id\": \"rootfs\", \"path_on_host\": \"./hello-rootfs.ext4\", \"is_root_device\": true, \"is_read_only\": false }"

With everything set to go, I can launch a guest machine:

# curl --unix-socket /tmp/firecracker.sock -i \
    -X PUT "http://localhost/actions" \
    -H "accept: application/json" \
    -H "Content-Type: application/json" \
    -d "{ \"action_type\": \"InstanceStart\" }"

And I am up and running with my first VM:

In a real-world scenario I would script or program all of my interactions with Firecracker, and I would probably spend more time setting up the networking and the other I/O. But re:Invent awaits and I have a lot more to do, so I will leave that part as an exercise for you.
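If you would rather script those calls than type curl commands, the same REST API can be driven from Python over the Unix-domain socket using nothing but the standard library. This is only a sketch of the four requests shown above, assuming Firecracker is listening on /tmp/firecracker.sock:

import http.client
import json
import socket

class UnixSocketHTTPConnection(http.client.HTTPConnection):
    # An HTTPConnection that talks to a Unix-domain socket instead of TCP.
    def __init__(self, socket_path):
        super().__init__('localhost')
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

def api_put(path, body, socket_path='/tmp/firecracker.sock'):
    conn = UnixSocketHTTPConnection(socket_path)
    conn.request('PUT', path, body=json.dumps(body),
                 headers={'Content-Type': 'application/json'})
    response = conn.getresponse()
    print(response.status, response.read().decode())
    conn.close()

# Same sequence as the curl examples above.
api_put('/machine-config', {'vcpu_count': 1, 'mem_size_mib': 512})
api_put('/boot-source', {'kernel_image_path': './hello-vmlinux.bin',
                         'boot_args': 'console=ttyS0 reboot=k panic=1 pci=off'})
api_put('/drives/rootfs', {'drive_id': 'rootfs', 'path_on_host': './hello-rootfs.ext4',
                           'is_root_device': True, 'is_read_only': False})
api_put('/actions', {'action_type': 'InstanceStart'})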

Collaborate with Us
As you can see this is a giant leap forward, but it is just a first step. The team is looking forward to telling you more, and to working with you to move ahead. Star the repo, join the community, and send us some code!

Jeff;


Categories: Cloud

New C5n Instances with 100 Gbps Networking

AWS Blog - Mon, 11/26/2018 - 20:26

We launched the powerful, compute-intensive C5 instances last year, and followed up with the C5d instances earlier this year with the addition of local NVMe storage. Both instances are built on the AWS Nitro system and are powered by AWS-custom 3.0 GHz Intel® Xeon® Platinum 8000 series processors. They are designed for compute-heavy applications such as batch processing, distributed analytics, high-performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding.

New 100 Gbps Networking
Today we are adding an even more powerful variant, the C5n instance. With up to 100 Gbps of network bandwidth, your simulations, in-memory caches, data lakes, and other communication-intensive applications will run better than ever. Here are the specs:

Instance Name   vCPUs   RAM        EBS Bandwidth    Network Bandwidth
c5n.large       2       5.25 GiB   Up to 3.5 Gbps   Up to 25 Gbps
c5n.xlarge      4       10.5 GiB   Up to 3.5 Gbps   Up to 25 Gbps
c5n.2xlarge     8       21 GiB     Up to 3.5 Gbps   Up to 25 Gbps
c5n.4xlarge     16      42 GiB     3.5 Gbps         Up to 25 Gbps
c5n.9xlarge     36      96 GiB     7 Gbps           50 Gbps
c5n.18xlarge    72      192 GiB    14 Gbps          100 Gbps

The Nitro Hypervisor allows the full range of C5n instances to deliver performance that is just about indistinguishable from bare metal. Other AWS Nitro System components, including the Nitro Security Chip, hardware EBS processing, and hardware support for the software defined network inside of each VPC also enhance performance.

Each vCPU is a hardware hyperthread on the Intel Xeon Platinum 8000 series processor. You get full control over the C-states on the two largest sizes, allowing you to run a single core at up to 3.5 GHz using Intel Turbo Boost Technology.

The new instances also feature a higher amount of memory per core, putting them in the current “sweet spot” for HPC applications that work most efficiently when there’s at least 4 GiB of memory for each core. The instances also benefit from some internal improvements that boost memory access speed by up to 19% in comparison to the C5 and C5d instances.

It’s All About the Network
Now let’s get to the big news!

The C5n instances incorporate the fourth generation of our custom Nitro hardware, allowing the high-end instances to provide up to 100 Gbps of network throughput, along with a higher ceiling on packets per second. The Elastic Network Interface (ENI) on the C5n uses up to 32 queues (in comparison to 8 on the C5 and C5d), allowing the packet processing workload to be better distributed across all available vCPUs. The ability to push more packets per second will make these instances a great fit for network appliances such as firewalls, routers, and 5G cellular infrastructure.

In order to make the most of the available network bandwidth, you need to be using the latest Elastic Network Adapter (ENA) drivers (available in the latest Amazon Linux, Red Hat 7.6, and Ubuntu AMIs, and in the upstream Linux kernel) and you need to make use of multiple traffic flows. Flows within a Placement Group can reach 10 Gbps; the rest can reach 5 Gbps. When using multiple flows on the high-end instances, you can transfer 100 Gbps between EC2 instances in the same region (within or across AZs), S3 buckets, and AWS services such as Amazon Relational Database Service (RDS), Amazon ElastiCache, and Amazon EMR.
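To illustrate, here is a hedged boto3 sketch that creates a cluster placement group and launches a pair of c5n.18xlarge instances into it so that flows between them can take advantage of the higher per-flow bandwidth; the AMI, key pair, and group name are placeholders of my own:

import boto3

ec2 = boto3.client('ec2')

# A cluster placement group keeps the instances close together on the network.
ec2.create_placement_group(GroupName='c5n-cluster', Strategy='cluster')

ec2.run_instances(
    ImageId='ami-0123456789abcdef0',    # placeholder AMI with current ENA drivers
    InstanceType='c5n.18xlarge',
    MinCount=2,
    MaxCount=2,
    KeyName='my-key-pair',              # placeholder key pair
    Placement={'GroupName': 'c5n-cluster'}
)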

Available Now
C5n instances are available now in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and AWS GovCloud (US-West) Regions and you can launch one (or an entire cluster of them) today in On-Demand, Reserved Instance, Spot, Dedicated Host, or Dedicated Instance form.

Jeff;

Categories: Cloud

New – EC2 Instances (A1) Powered by Arm-Based AWS Graviton Processors

AWS Blog - Mon, 11/26/2018 - 20:13

Earlier this year I told you about the AWS Nitro System and promised you that it would allow us to “deliver new instance types more quickly than ever in the months to come.” Since I made that promise we have launched memory-intensive R5 and R5d instances, high frequency z1d instances, burstable T3 instances, high memory instances with up to 12 TiB of memory, and AMD-powered M5a and R5a instances. The purpose-built hardware and the lightweight hypervisor that comprise the AWS Nitro System allow us to innovate more quickly while devoting virtually all of the power of the host hardware to the instances.

We acquired Annapurna Labs in 2015 after working with them on the first version of the AWS Nitro System. Since then we’ve worked with them to build and release two generations of ASICs (chips, not shoes) that now offload all EC2 system functions to Nitro, allowing 100% of the hardware to be devoted to customer instances. A few years ago the team started to think about building an Amazon-built custom CPU designed for cost-sensitive scale-out workloads.

AWS Graviton Processors
Today we are launching EC2 instances powered by Arm-based AWS Graviton Processors. Built around Arm cores and making extensive use of custom-built silicon, the A1 instances are optimized for performance and cost. They are a great fit for scale-out workloads where you can share the load across a group of smaller instances. This includes containerized microservices, web servers, development environments, and caching fleets.

The A1 instances are available in five sizes, all EBS-Optimized by default, at a significantly lower cost:

Instance Name   vCPUs   RAM      EBS Bandwidth    Network Bandwidth   On-Demand Price/Hour (US East (N. Virginia))
a1.medium       1       2 GiB    Up to 3.5 Gbps   Up to 10 Gbps       $0.0255
a1.large        2       4 GiB    Up to 3.5 Gbps   Up to 10 Gbps       $0.0510
a1.xlarge       4       8 GiB    Up to 3.5 Gbps   Up to 10 Gbps       $0.1020
a1.2xlarge      8       16 GiB   Up to 3.5 Gbps   Up to 10 Gbps       $0.2040
a1.4xlarge      16      32 GiB   3.5 Gbps         Up to 10 Gbps       $0.4080

If your application is written in a scripting language, odds are that you can simply move it over to an A1 instance and run it as-is. If your application compiles down to native code, you will need to rebuild it on an A1 instance.

I was lucky enough to get early access to some A1 instances in order to put them through their paces. The console was still under construction so I launched my first instance from the command line:

$ aws ec2 run-instances --image-id ami-036237e941dccd50e \
    --instance-type a1.medium --count 1 --key-name keys-jbarr-us-east

The instance is up and running in seconds; I can run uname to check the architecture:

AMIs are available for Amazon Linux 2, RHEL, and Ubuntu today, with additional operating system support on the way.

Available Now
The A1 instances are available now in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) Regions in On-Demand, Reserved Instance, Spot, Dedicated Instance, and Dedicated Host form and you can launch them today.

Jeff;

 

Categories: Cloud

New – Use an AWS Transit Gateway to Simplify Your Network Architecture

AWS Blog - Mon, 11/26/2018 - 19:50

It is safe to say that Amazon Virtual Private Cloud is one of the most useful and central features of AWS. Our customers configure their VPCs in a wide variety of ways, and take advantage of numerous connectivity options and gateways including AWS Direct Connect (via Direct Connect Gateways), NAT Gateways, Internet Gateways, Egress-Only Internet Gateways, VPC Peering, AWS Managed VPN Connections, and PrivateLink.

Because our customers benefit from the isolation and access control that they can achieve using VPCs, subnets, route tables, security groups, and network ACLs, they create a lot of them. It is not uncommon to find customers with hundreds of VPCs distributed across AWS accounts and regions in order to serve multiple lines of business, teams, projects, and so forth.

Things get a bit more complex when our customers start to set up connectivity between their VPCs. All of the connectivity options that I listed above are strictly point-to-point, so the number of VPC-to-VPC connections grows quickly.

New AWS Transit Gateway
Today we are giving you the ability to use the new AWS Transit Gateway to build a hub-and-spoke network topology. You can connect your existing VPCs, data centers, remote offices, and remote gateways to a managed Transit Gateway, with full control over network routing and security, even if your VPCs, Active Directories, shared services, and other resources span multiple AWS accounts. You can simplify your overall network architecture, reduce operational overhead, and gain the ability to centrally manage crucial aspects of your external connectivity, including security. Last but not least, you can use Transit Gateways to consolidate your existing edge connectivity and route it through a single ingress/egress point.

Transit Gateways are easy to set up and to use, and are designed to be highly scalable and resilient. You can attach up to 5000 VPCs to each gateway and each attachment can handle up to 50 Gbits/second of bursty traffic. You can attach your AWS VPN connections to a Transit Gateway today, with Direct Connect planned for early 2019.

Creating a Transit Gateway
This new feature makes use of AWS Resource Access Manager, a new AWS service that makes it really easy for you to share AWS resources across accounts. An account that owns a resource simply creates a Resource Share and specifies a list of other AWS accounts that can access the resource. Transit Gateways are one of the first resource types that you can share in this fashion, with many others on the roadmap. I’ll have a lot more to say about this in the future; for now think of it as separating the concepts of ownership and access for a given AWS resource.

The first step is to create a Transit Gateway in my AWS account. I open up the VPC Console (CLI, API, and CloudFormation support is also available), select Transit Gateways and click Create Transit Gateway to get started. I enter a name and a description, an ASN for the Amazon side of the gateway, and can choose to automatically accept sharing requests from other accounts:

My gateway is available within minutes:

Now I can attach it to a VPC and select the subnet where I have workloads (at most one subnet per AZ):
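If you prefer to script this, here is a rough boto3 equivalent of the console steps so far; the ASN, VPC ID, and subnet ID are placeholders:

import boto3

ec2 = boto3.client('ec2')

# Create the Transit Gateway with the options chosen in the console.
tgw = ec2.create_transit_gateway(
    Description='My first Transit Gateway',
    Options={
        'AmazonSideAsn': 64512,
        'AutoAcceptSharedAttachments': 'enable'
    }
)['TransitGateway']

# Attach it to a VPC, selecting at most one subnet per Availability Zone.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw['TransitGatewayId'],
    VpcId='vpc-0123456789abcdef0',
    SubnetIds=['subnet-0123456789abcdef0']
)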

After that, I head over to the Resource Access Manager Console and click Create a resource share:

I name my share, find and add my Transit Gateway to it, and add the AWS accounts that I want to share it with. I can also choose to share it with an Organization or an Organizational Unit (OU). I make my choices and click Create resource share to proceed:

My resource share is created and ready to go within a few minutes:

Next, I log in to the accounts that I shared the gateway with, head back to the RAM Console, locate the invitation, and click it:

I confirm that I am accepting the desired invitation, and click Accept resource share:

I confirm my intent, and then I can see the Transit Gateway as a resource that is shared with me:

The next step (not shown) is to attach it to the desired VPCs in this account!

As you can see, you can use Transit Gateways to simplify your networking model. You can easily build applications that span multiple VPCs and you can share network services across them without having to manage a complex network. For example, you can go from this:

To this:

You can also connect a Transit Gateway to a firewall or an IPS (Intrusion Prevention System) and create a single VPC that handles all ingress and egress traffic for your network.

Things to Know
Here are a couple of other things that you should know about VPC Transit Gateways:

AWS Integration – The Transit Gateways publish metrics to Amazon CloudWatch and also generate VPC Flow Logs records.

VPN ECMP Support – You can enable Equal-Cost Multi-Path (ECMP) support on your VPN connections. If the connections advertise the same CIDR blocks, traffic will be distributed equally across them.

Routing Domains – You can use multiple route tables on the same Transit Gateway and use them to control routing on a per-attachment basis. You can isolate VPC traffic or divert traffic from certain VPCs to a separate inspection domain.

Security – You can use VPC security groups and network ACLs to control the flow of traffic between your on-premises networks.

Pricing – You pay a per-hour fee for each hour that a Transit Gateway is attached, along with a per-GB data processing fee.

Direct Connect – We are working on support for AWS Direct Connect.

Available Now
AWS Transit Gateways are available now and you can start using them today!

Jeff;

 

Categories: Cloud

New – AWS Global Accelerator for Availability and Performance

AWS Blog - Mon, 11/26/2018 - 19:48

Having previously worked in an area where regulation required us to segregate user data by geography and abide by data sovereignty laws, I can attest to the complexity of running global workloads that need infrastructure deployed in multiple countries. Availability, performance, and failover all become a yak shave as you expand past your original data center. Customers have told us that they need to run in multiple regions, whether it is for availability, performance or regulation. They love that they can template their workloads through AWS CloudFormation, replicate their databases with Amazon DynamoDB Global Tables and deploy serverless workloads with AWS SAM. All of these capabilities can be executed in minutes and provide a global experience for your audience. Customers have also told us that they love the regional isolation that AWS provides to reduce blast radius and increase availability, but they would like some help with stitching together other parts of their applications.

 

Introducing AWS Global Accelerator

That’s why I am pleased to announce AWS Global Accelerator, a network service that enables organizations to seamlessly route traffic to multiple regions and improve availability and performance for their end users. AWS Global Accelerator uses AWS’s vast, highly available and congestion-free global network to direct internet traffic from your users to your applications running in AWS regions. With AWS Global Accelerator, your users are directed to your workload based on their geographic location, application health, and weights that you can configure. AWS Global Accelerator also allocates static Anycast IP addresses that are globally unique for your application and do not change, thus removing the need to update clients as your application scales.

You can get started by provisioning your Accelerator and associating it with your applications running on Network Load Balancers, Application Load Balancers, or Elastic IP addresses. AWS Global Accelerator then allocates two static Anycast IP addresses from the AWS network which serve as an entry point for your workloads. AWS Global Accelerator supports both TCP and UDP protocols, health checking of your target endpoints, and will route traffic away from unhealthy applications.

You can use an Accelerator in one or more AWS regions, providing increased availability and performance for your end users. Low-latency applications typically used by media, financial, and gaming organizations will benefit from Accelerator’s use of the AWS global network and optimizations between users and the edge network.

Image 1 – How it Works

Here’s what you need to know:

Static Anycast IPs – Global Accelerator uses Static IP addresses that serve as a fixed entry point to your applications hosted in any number of AWS Regions. These IP addresses are Anycast from AWS edge locations, meaning that these IP addresses are announced from multiple AWS edge locations, enabling traffic to ingress onto the AWS global network as close to your users as possible. You can associate these addresses to regional AWS resources or endpoints, such as Network Load Balancers, Application Load Balancers, and Elastic IP addresses. You don’t need to make any client-facing changes or update DNS records as you modify or replace endpoints. An Accelerator’s IP addresses are static and will serve as the front door for your user-facing applications.

AWS’s Global Network – Traffic routed through Accelerator traverses the well monitored, congestion free, redundant AWS global network (instead of the public internet). Clients route to the optimal region based on client location, health-checks, and configured weights. Traffic will enter through an AWS edge location that is advertising an Accelerator’s Anycast IP addresses, from where the request will be routed through an optimized path towards the application.

Client State – AWS Global Accelerator enables you to build applications for which client state is an essential requirement. Stateful applications route users to the same endpoint after their initial connection. Global Accelerator achieves this by using the source IP of the client as the identifier for maintaining state, irrespective of the port and protocol.

 

AWS Global Accelerator in Action

To get familiar with AWS Global Accelerator’s features I am going to use two EC2 hosted WordPress deployments that are behind an Application Load Balancer. To test the global nature of AWS Global Accelerator, I have deployed our application to the Singapore and Tokyo regions. Image 2 illustrates our happy path. Traffic is sent from our client to the nearest edge location via the two Anycast IP addresses that the edge location is advertising. Our request routes through the AWS global network to the Accelerator, which selects the closest healthy endpoint group. An Application Load Balancer terminates our request and passes it to the WordPress instance where our content is served from.

Image 2 – User Flow

 

I’ve created two content servers using the instructions found here. I have changed the home banners for the regions we will be serving our content from so that I can identify which path I am routed through. With our content servers created we build an Application Load Balancer for each and wait for them to become healthy and in-service.

Image 3 – Shaun’s Global Website

Creating the Global Accelerator is as simple as choosing a name, specifying the listener type (port 80 and TCP for WordPress) and creating some endpoint groups for each region. Let’s configure a listener for our Accelerator that clients connect to once onboard the edge network. As we are serving HTTP traffic, port 80 is a natural choice. I have enabled client affinity using SourceIP which redirects our test clients to the same region and application once they have connected for the first time.

 

Endpoint groups are targets for our Accelerator; by default, each group has a traffic dial of 100. Turning down the traffic dial redirects clients to other endpoint groups or another AWS region, which is handy for performing maintenance or dealing with an unexpected traffic surge. For our experiment, I choose the Tokyo and Singapore regions with the default dial of 100.

Image 4 – Configuring endpoint groups

Health checks are a powerful tool that can be used either in a simple configuration or provide deep application awareness. Today I am serving a simple website using the default HTTP health check, polling for a 200 OK HTTP on the default path. To complete our configuration we need to populate our endpoint groups with the Application Load Balancers we created earlier.

Image 5 – Adding our ALB’s to an Endpoint Group
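The same setup can be driven through the API. Below is a rough boto3 sketch of the accelerator, the listener, and the Tokyo endpoint group described above; the accelerator name and the ALB ARN are placeholders, and a similar create_endpoint_group call would be repeated for Singapore (ap-southeast-1):

import uuid
import boto3

# The Global Accelerator API is served from the us-west-2 endpoint.
ga = boto3.client('globalaccelerator', region_name='us-west-2')

accelerator = ga.create_accelerator(
    Name='shauns-global-website',
    Enabled=True,
    IdempotencyToken=str(uuid.uuid4())
)['Accelerator']

listener = ga.create_listener(
    AcceleratorArn=accelerator['AcceleratorArn'],
    Protocol='TCP',
    PortRanges=[{'FromPort': 80, 'ToPort': 80}],
    ClientAffinity='SOURCE_IP',
    IdempotencyToken=str(uuid.uuid4())
)['Listener']

ga.create_endpoint_group(
    ListenerArn=listener['ListenerArn'],
    EndpointGroupRegion='ap-northeast-1',    # Tokyo
    TrafficDialPercentage=100,
    EndpointConfigurations=[{
        'EndpointId': 'arn:aws:elasticloadbalancing:ap-northeast-1:123456789012:loadbalancer/app/tokyo-alb/abc',  # placeholder ALB ARN
        'Weight': 128
    }],
    IdempotencyToken=str(uuid.uuid4())
)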

With everything configured we can start routing traffic through our two Anycast IP addresses assigned by the Accelerator. This can be done with your browser, an HTTP client or curl. As I want to test a global audience, I will use a proxy to set my location through various locations across Asia, America, and Europe to see how our traffic is routed.

Image 6 – Requests being distributed to our global website.

One of the most powerful features of AWS Global Accelerator is the ability to fail between regions in less than a minute. I’ve set up a load test to hit the site with 100 requests per second and will turn off the Singapore server to test how fast our traffic is routed through to our Tokyo endpoint.

Traffic starts routing through our Accelerator at 03:15; at 03:30 I shut down the Singapore instance. By 03:31 Tokyo has already processed close to 4,000 requests and is serving all the traffic. At 03:35 I enable the Singapore server. Because of the health check warm-up (90 seconds), we don’t start seeing recovery until 03:38. If I had configured a more aggressive health check, we would fail over and recover within five minutes!

Availability and Pricing

In AWS Global Accelerator, you are charged for each accelerator that is deployed and the amount of traffic in the dominant direction that flows through the accelerator. An accelerator is the resource you create to direct traffic to optimal endpoints over the AWS global network. Customers will typically set up one accelerator for each application, but more complex applications may require more than one accelerator. For every accelerator that is running, you are charged a fixed hourly fee and an incremental charge over your standard Data Transfer rates, also called a Data Transfer-Premium fee (DT-Premium). DT-Premium is calculated every hour on the dominant direction of your traffic, i.e. inbound traffic to your application or outbound traffic from your application to your users on the internet.

Fixed fee: For every full or partial hour when an accelerator runs in your account, you are charged $0.025.
Data Transfer-Premium fee (DT-Premium): This is a rate per gigabyte of data transferred over the AWS network. The DT-Premium rate depends on the AWS Region (source) that serves the request and the AWS edge location (destination) where the responses are directed. You will only be charged DT-Premium in the dominant data transfer direction.

The DT-Premium rate per GB, by source (AWS Region) and destination (AWS edge location):

Source \ Destination    NA            EU            APAC
NA                      $0.015 /GB    $0.015 /GB    $0.035 /GB
EU                      $0.015 /GB    $0.015 /GB    $0.043 /GB
APAC                    $0.012 /GB    $0.043 /GB    $0.010 /GB

AWS Global Accelerator is available in US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Tokyo) and Asia Pacific (Singapore).

Categories: Cloud

AWS Previews and Pre-Announcements at re:Invent 2018 – Midnight Madness

AWS Blog - Mon, 11/26/2018 - 06:03

As promised in Welcome to AWS re:Invent 2018, here’s a summary of the launches and previews from Midnight Madness:

Launches
Here are detailed blog posts & what’s new pages for last night’s launches:

Amazon S3 Batch Operations
This upcoming feature will allow you to make large-scale, parallel changes across millions or billions of S3 objects with a couple of clicks. You will be able to copy objects between buckets, replace object tag sets, update access controls, restore objects from Amazon Glacier, and invoke AWS Lambda functions. Sign up for the preview.

EFS Infrequent Access Storage Class
As part of a new Lifecycle Management option for Amazon Elastic File System file systems, you will be able to indicate that you want to move files that have not been accessed in the last 30 days to a storage class that is 85% less expensive. The new storage class is totally transparent. You can still access your files as needed and in the usual way, with no code or operational changes necessary.

Stay Tuned
I am looking forward to writing about each of these services when they are ready to launch, so stay tuned!

Jeff;

Categories: Cloud

New – Automatic Cost Optimization for Amazon S3 via Intelligent Tiering

AWS Blog - Mon, 11/26/2018 - 06:00

Amazon Simple Storage Service (S3) has been around for over 12.5 years, stores trillions of objects, and processes millions of requests for them every second. Our customers count on S3 to support their backup & recovery, data archiving, data lake, big data analytics, hybrid cloud storage, cloud-native storage, and disaster recovery needs. Starting from the initial one-size-fits-all Standard storage class, we have added additional classes in order to better serve our customers. Today, you can choose from four such classes, each designed for a particular use case. Here are the current options:

Standard – Designed for frequently accessed data.

Standard-IA – Designed for long-lived, infrequently accessed data.

One Zone-IA – Designed for long-lived, infrequently accessed, non-critical data.

Glacier – Designed for long-lived, infrequently accessed, archived critical data.

You can choose the applicable storage class when you upload your data to S3, and you can also use S3’s Lifecycle Policies to tell S3 to transition objects from Standard to Standard-IA, One Zone-IA, or Glacier based on their creation date. Note that the Reduced Redundancy storage class is still supported, but we recommend the use of One Zone-IA for new applications.

If you want to tier between different S3 storage classes today, Lifecycle Policies automate moving objects based on the creation date of the object in storage. If your data is stored in Standard storage today and you want to find out if some of that storage is suited to the S-IA storage class, you can use Storage Class Analysis in the S3 Console to identify what groups of objects to tier using Lifecycle. However, there are many situations where the access pattern of data is irregular, or you simply don’t know it because your data set is accessed by many applications across an organization. Or maybe you are spending so much time focusing on your app that you don’t have time to use tools like Storage Class Analysis.

New Intelligent Tiering
In order to make it easier for you to take advantage of S3 without having to develop a deep understanding of your access patterns, we are launching a new storage class, S3 Intelligent-Tiering. This storage class incorporates two access tiers: frequent access and infrequent access. Both access tiers offer the same low latency as the Standard storage class. For a small monitoring and automation fee, S3 Intelligent-Tiering monitors access patterns and moves objects that have not been accessed for 30 consecutive days to the infrequent access tier. If the data is accessed later, it is automatically moved back to the frequent access tier. The bottom line: You save money even under changing access patterns, with no performance impact, no operational overhead, and no retrieval fees.

You can specify the use of the Intelligent-Tiering storage class when you upload new objects to S3. You can also use a Lifecycle Policy to effect the transition after a specified time period. There are no retrieval fees and you can use this new storage class in conjunction with all other features of S3 including cross-region replication, encryption, object tagging, and inventory.

If you are highly confident that your data is accessed infrequently, the Standard-IA storage class is still a better choice with respect to cost savings. However, if you don’t know your access patterns or if they are subject to change, Intelligent-Tiering is for you!

Intelligent Tiering in Action
I simply choose the new storage class when I upload objects to S3:

I can see the storage class in the S3 Console, as usual:

And I can create Lifecycle Rules that make use of Intelligent-Tiering:

And that’s just about it. Here are a few things that you need to know:

Object Size – You can use Intelligent-Tiering for objects of any size, but objects smaller than 128 KB will never be transitioned to the infrequent access tier and will be billed at the usual rate for the frequent access tier.

Object Life – This is not a good fit for objects that live for less than 30 days; all objects will be billed for a minimum of 30 days.

Durability & Availability – The Intelligent-Tiering storage class is designed for 99.9% availability and 99.999999999% durability, with an SLA that provides for 99.0% availability.

Pricing – Just like the other storage classes, you pay for monthly storage, requests, and data transfer. Storage for objects in the frequent access tier is billed at the same rate as S3 Standard; storage for objects in the infrequent access tier is billed at the same rate as S3 Standard-Infrequent Access. When you use Intelligent-Tiering, you pay a small monthly per-object fee for monitoring and automation; this means that the storage class becomes even more economical as object sizes grow. As I noted earlier, S3 Intelligent-Tiering will automatically move data back to the frequent access tier based on access patterns but there is no retrieval charge.

Query in Place – Queries made using S3 Select do not alter the storage tier. Amazon Athena and Amazon Redshift Spectrum access the data using the regular GET operation and will trigger a transition.

API and CLI Access – You can use the storage class INTELLIGENT_TIERING from the S3 CLI and S3 APIs.
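For example, here is a minimal boto3 sketch that uploads an object straight into the new class (the bucket and key names are placeholders):

import boto3

s3 = boto3.client('s3')

# Store the object in the Intelligent-Tiering storage class from the start.
s3.put_object(
    Bucket='my-bucket',
    Key='logs/2018/11/26/data.json',
    Body=b'{"hello": "world"}',
    StorageClass='INTELLIGENT_TIERING'
)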

Available Now
This new storage class is available now and you can start using it today in all AWS Regions.

Jeff;

PS – Remember the trillions of objects and millions of requests that I just told you about? We fed them into an Amazon Machine Learning model and used them to predict future access patterns for each object. The results were then used to inform storage of your S3 objects in the most cost-effective way possible. This is a really interesting benefit that is made possible by the incredible scale of S3 and the diversity of use cases that it supports. There’s nothing else like it, as far as I know!

Categories: Cloud

New – AWS Transfer for SFTP – Fully Managed SFTP Service for Amazon S3

AWS Blog - Mon, 11/26/2018 - 06:00

Many organizations use SFTP (Secure File Transfer Protocol) as part of long-established data processing and partner integration workflows. While it would be easy to dismiss these systems as “legacy,” the reality is that they serve a useful purpose and will continue to do so for quite some time. We want to help our customers to move these workflows to the cloud in a smooth, non-disruptive way.

AWS Transfer for SFTP
Today we are launching AWS Transfer for SFTP, a fully-managed, highly-available SFTP service. You simply create a server, set up user accounts, and associate the server with one or more Amazon Simple Storage Service (S3) buckets. You have fine-grained control over user identity, permissions, and keys. You can create users within Transfer for SFTP, or you can make use of an existing identity provider. You can also use IAM policies to control the level of access granted to each user. You can also make use of your existing DNS name and SSH public keys, making it easy for you to migrate to Transfer for SFTP. Your customers and your partners will continue to connect and to make transfers as usual, with no changes to their existing workflows.

You have full access to the underlying S3 buckets and you can make use of many different S3 features including lifecycle policies, multiple storage classes, several options for server-side encryption, versioning, and so forth. You can write AWS Lambda functions to build an “intelligent” FTP site that processes incoming files as soon as they are uploaded, query the files in situ using Amazon Athena, and easily connect to your existing data ingestion process. On the outbound side, you can generate reports, documents, manifests, custom software builds and so forth using other AWS services, and then store them in S3 for easy, controlled distribution to your customers and partners.

Creating a Server
To get started, I open up the AWS Transfer for SFTP Console and click Create server:

I can have Transfer for SFTP manage user names and passwords, or I can access an existing LDAP or Active Directory identity provider via API Gateway. I can use an Amazon Route 53 DNS alias or an existing hostname, and I can tag my server. I start with default values and click Create server to actually create my SFTP server:

It is up and running within minutes:

Now I can add a user or two! I select the server and click Add user, then enter the user name, pick the S3 bucket (with an optional prefix) for their home directory, and select an IAM role that gives the user the desired access to the bucket. Then I paste the SSH public key (created with ssh-keygen), and click Add:

And now I am all set. I retrieve the server endpoint from the console and issue my first sftp command:

The files are visible in the jeff/ section of the S3 bucket immediately:

I could attach a Lambda function to the bucket and do any sort of post-upload processing I want. For example, I could run all uploaded images through Amazon Rekognition and route them to one of several different destinations depending on the types of objects that they contain, and I could run audio files through Amazon Transcribe to perform a speech-to-text operation.
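Here's a rough sketch of what such a post-upload Lambda handler might look like, assuming the function is subscribed to the bucket's ObjectCreated events; the label handling at the end is purely illustrative:

import boto3
from urllib.parse import unquote_plus

rekognition = boto3.client("rekognition")

def handler(event, context):
    """Invoked by S3 ObjectCreated events on the upload bucket."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])  # event keys are URL-encoded

        # Only inspect image uploads; other file types could be routed elsewhere
        if not key.lower().endswith((".jpg", ".jpeg", ".png")):
            continue

        labels = rekognition.detect_labels(
            Image={"S3Object": {"Bucket": bucket, "Name": key}},
            MaxLabels=10,
        )
        names = [label["Name"] for label in labels["Labels"]]
        print(f"{key}: {names}")  # routing decisions based on the labels would go here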

Full Control via IAM
In order to get right to the point in my walk-through, my IAM role uses this very simple policy:

{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:ListAllMyBuckets", "s3:GetBucketLocation" ], "Resource": "*" }, { "Effect": "Allow", "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::data-transfer-inbound" }, { "Effect": "Allow", "Action": "s3:*", "Resource": "arn:aws:s3:::data-transfer-inbound/jeff/*" } ] }

If I plan to host lots of users on the same server, I can make use of a scope-down policy that looks like this:

{ "Version": "2012-10-17", "Statement": [ { "Sid": "ListHomeDir", "Effect": "Allow", "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::${transfer:HomeBucket}" }, { "Sid": "AWSTransferRequirements", "Effect": "Allow", "Action": [ "s3:ListAllMyBuckets", "s3:GetBucketLocation" ], "Resource": "*" }, { "Sid": "HomeDirObjectAccess", "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObjectVersion", "s3:DeleteObject", "s3:GetObjectVersion" ], "Resource": "arn:aws:s3:::${transfer:HomeDirectory}*" } ] }

The ${transfer:HomeBucket} and ${transfer:HomeDirectory} policy variables will be set to appropriate values for each user when the scope-down policy is evaluated; this allows me to use the same policy, suitably customized, for each user.
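If you prefer to set users up programmatically, here's a minimal boto3 sketch of what that might look like. The server ID and role ARN are placeholders, and I'm assuming the scope-down policy can be passed to CreateUser as a JSON string via its Policy parameter:

import json
import boto3

transfer = boto3.client("transfer")

# The scope-down policy shown above, expressed as a Python dict
scope_down_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListHomeDir",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::${transfer:HomeBucket}",
        },
        {
            "Sid": "AWSTransferRequirements",
            "Effect": "Allow",
            "Action": ["s3:ListAllMyBuckets", "s3:GetBucketLocation"],
            "Resource": "*",
        },
        {
            "Sid": "HomeDirObjectAccess",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObjectVersion",
                "s3:DeleteObject",
                "s3:GetObjectVersion",
            ],
            "Resource": "arn:aws:s3:::${transfer:HomeDirectory}*",
        },
    ],
}

transfer.create_user(
    ServerId="s-b445dcff7f164c73a",                           # placeholder server ID
    UserName="jeff",
    Role="arn:aws:iam::123456789012:role/sftp-user-role",     # placeholder role ARN
    HomeDirectory="/data-transfer-inbound/jeff",
    Policy=json.dumps(scope_down_policy),
    SshPublicKeyBody=open("id_rsa.pub").read(),               # key created with ssh-keygen
)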

Things to Know
Here are a couple of things to keep in mind regarding AWS Transfer for SFTP:

Programmatic Access – A full set of APIs and CLI commands is also available. For example, I can create a server with one simple command:

$ aws transfer create-server --identity-provider-type SERVICE_MANAGED
-------------------------------------
|            CreateServer           |
+-----------+-----------------------+
|  ServerId |  s-b445dcff7f164c73a  |
+-----------+-----------------------+

There are many other commands including list-servers, start-server, stop-server, create-user, and list-users.
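For Python fans, here's a rough boto3 equivalent of those commands; it's a sketch only, and it omits error handling and waiting for the server to come online:

import boto3

transfer = boto3.client("transfer")

# Create a server with service-managed identities, as in the CLI example above
server_id = transfer.create_server(IdentityProviderType="SERVICE_MANAGED")["ServerId"]
print("Created", server_id)

# Enumerate servers and their states
for server in transfer.list_servers()["Servers"]:
    print(server["ServerId"], server["State"])

# Stop and later restart a server without deleting its configuration
transfer.stop_server(ServerId=server_id)
transfer.start_server(ServerId=server_id)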

CloudWatch  – Each server can optionally send detailed access logs to Amazon CloudWatch. There’s a separate log stream for each SFTP session and one more for authentication errors:

Alternate Identity Providers – I showed you the built-in user management above. You can also access an alternate identity provider that taps into your existing LDAP or Active Directory.

Pricing – You pay a per-hour fee for each running server and a per-GB data upload and download fee.

Available Now
AWS Transfer for SFTP is available today in US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Canada (Central), Europe (Ireland), Europe (Paris), Europe (Frankfurt), Europe (London), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Seoul) Regions.

Jeff;

Categories: Cloud

New – AWS DataSync – Automated and Accelerated Data Transfer

AWS Blog - Mon, 11/26/2018 - 06:00

Many AWS customers have told us that they need to move large amounts of data into and out of the AWS Cloud. Their use cases include:

Migration – Some customers have large data sets that are in a constant state of flux. There is no natural break or stopping point that they can use to effect a one-time transfer.

Upload & Process – Other customers regularly generate massive data sets on-premises for processing in the cloud. This includes our customers in the media & entertainment, oil & gas, and life sciences industries.

Backup / DR – Finally, other customers copy their precious on-premises data to the cloud for safekeeping and to ensure business continuity.

These customers work at scale! One-time or periodic transfers of tens or hundreds of terabytes are routine. At this scale, making effective use of network bandwidth and achieving high throughput are essential, with reliability, security, and ease of use equally important.

Introducing AWS DataSync
Today we are adding AWS DataSync to our portfolio of data transfer services. Joining AWS Snowball, AWS Snowmobile, Kinesis Data Firehose, S3 Transfer Acceleration, and AWS Storage Gateway, AWS DataSync is built around a super-efficient, purpose-built data transfer protocol that can run 10 times as fast as open source data transfer tools. It is easy to set up and use (Console and CLI access is available) and it scales to the sky!

AWS DataSync is a managed service and you pay only for the data that you transfer. It can sync on-premises data to Amazon Simple Storage Service (S3) buckets or to Amazon Elastic File System (EFS) file systems across the Internet or via AWS Direct Connect, and it can also sync data from AWS back to on-premises storage.

The AWS DataSync Agent is an important part of the service. You deploy it as a VM in your on-premises data center, where it acts as a client to your NFS storage and accelerates the data transfer.

AWS DataSync in Action
Let’s take AWS DataSync for a spin! The AWS DataSync team set up a test environment for me that included the Agent and an NFS server.

Armed with the public IP address of the Agent, I open the AWS DataSync Console and click Get started:

My use case is on-premises to AWS. I select that option, and click Create agent to connect to my on-premises agent:

I download and run the VM image (this was already taken care of for me), enter the public IP address for the agent, and click Get key. Then I name & tag my agent, and click Create agent:

My agent is ready right away and I am ready to create a DataSync task to indicate what I want to sync and when I want to sync it! I click Create task to do this:

I select my use case again, and click Next to proceed:

I create a source location and point it to my NFS server, then click Next (I can configure and use multiple agents in order to increase overall throughput):

Now I create a destination location, choosing between an EFS file system and an S3 bucket:

Next, I create my task. I give it a name and accept all of the default values, and review it (not shown) on the next page. As you can see, I have options to control copying, file management, and use of bandwidth:

My task is ready to use:

I select it and either run it as-is, or override my settings:

The transfer starts right away and I can watch as it progresses:

The transfer takes place across an SSL connection; my bucket quickly fills up with files:

And I can see the final status:

If I run it again without making any changes to the source files, it verifies that the files on both ends are the same, and copies nothing:

If I had changed the files or their permissions, DataSync would transfer the changes in order to make sure that the source and the destination match. The transfers are always incremental, making DataSync perfect for those migration and disaster recovery use cases that I described earlier.
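If you'd rather script the same workflow instead of using the console, here's a hedged boto3 sketch; the activation key, hostname, bucket ARN, and role ARN are all placeholders that you'd replace with your own values:

import boto3

datasync = boto3.client("datasync")

# Register the on-premises agent using the activation key retrieved from its IP address
agent = datasync.create_agent(
    ActivationKey="EXAMPLE-ACTIVATION-KEY",          # placeholder
    AgentName="on-prem-agent",
)

# Source: an NFS export reachable from the agent
source = datasync.create_location_nfs(
    ServerHostname="nfs.example.internal",           # placeholder hostname
    Subdirectory="/exports/media",
    OnPremConfig={"AgentArns": [agent["AgentArn"]]},
)

# Destination: an S3 bucket, accessed through a role that DataSync can assume
destination = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::my-datasync-bucket",   # placeholder bucket
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::123456789012:role/datasync-s3"},
)

# Create the task and kick off an incremental transfer
task = datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="nfs-to-s3",
)
execution = datasync.start_task_execution(TaskArn=task["TaskArn"])
print(execution["TaskExecutionArn"])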

Things to Know
Here are a couple of things that you need to know about AWS DataSync:

Source/Destination – You can transfer from your on-premises servers to AWS and vice versa.

Performance – The overall data transfer speed is a function of overall network conditions; a single agent can saturate a 10 Gbps network link.

Pricing – You pay a low, per-GB charge for data transfer; there is no charge for the service itself.

Available Now
AWS DataSync is available now and you can start using it today in the US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo) Regions.

Jeff;

 

Categories: Cloud

AWS RoboMaker – Develop, Test, Deploy, and Manage Intelligent Robotics Apps

AWS Blog - Mon, 11/26/2018 - 00:02

I have wanted to build a robot for decades and now I have my chance! To me, the big challenge has always been the sheer number of different parts that need to connect and interoperate. Complex hardware, software, sensors, communication systems, and a “robot brain” must all work together in order for the robot to function as desired.

Today I would like to tell you about AWS RoboMaker. This new service will help you to develop, simulate, test, and deploy the robot of your dreams. You can develop your code inside of a cloud-based development environment, test it in a Gazebo simulation, and then deploy your finished code to a fleet of one or more robots. Once your code is deployed, you can push updates and bug fixes out to your entire fleet with a couple of clicks. Your code can make use of AWS services such as Amazon Lex, Amazon Polly, Amazon Rekognition, Amazon Kinesis Video Streams, and Amazon CloudWatch to build a sophisticated robot brain, accessible as a set of packages for ROS (Robot Operating System). You can also build and train Amazon SageMaker models in order to make use of Machine Learning in your robot brain.

RoboMaker is designed to work with robots of many different shapes and sizes running in many different physical environments: a home workshop, a factory floor, a classroom, a restaurant, a hotel, or even another planet!

Let’s take a look…

AWS RoboMaker in Action – Running a Simulation
My robot adventure starts at the RoboMaker Console (API and CLI access is also available); I click on Try sample application to get started:

RoboMaker includes a nice selection of sample applications that I can use to get started. I’ll choose the second one, Robot Monitoring, and click Launch:

A CloudFormation stack is launched to create a VPC, a RoboMaker Simulation Job, and a Lambda function. This takes just a few minutes and I can see my job in the Console:

I click on the job and I can learn more about it:

The next part of the page is the most interesting. The simulation is running in the background and I have four tools to view and interact with it:

Gazebo is the actual robot simulator. I can watch the robot wander through the scene and interact with the Gazebo UI in the usual way:

Rqt is a GUI tool for ROS development. I can use it to inspect various aspects of my robot, such as the computation graph:

I can also get a robot’s-eye view of the simulation:

Rviz gives me another view of the state of my simulation and my robot:

Terminal gives me shell access to the EC2 instance that is running my job:

I can also watch all four of them at once:

Remember that the name of this sample is Monitor Fleets of Robots with Amazon CloudWatch. The code is running in the simulator and I can check the CloudWatch metrics. The most interesting one is the distance between the robot and the goal:

AWS RoboMaker in Action – Running a Development Environment
I actually started in the middle of the story by showing you how to run a simulation. Let’s back up a step, create a development environment, and see how the story starts. RoboMaker helps me to create and manage multiple development environments. I click on Create environment to get started:

I give my environment a name, use the default instance type, choose a VPC and subnet, and click Create to proceed:

When my environment is ready I click Open environment to proceed:

Cloud9 is up and running within a minute or so, and I can access the sample RoboMaker applications with a click:

Each sample includes all of the files for the code that will run on the robot and for the simulation environment:

I can modify the code, build and package it into a bundle, and then restart the simulator to see my modifications in action.

AWS RoboMaker in Action – Deploying Code and Managing a Robot Fleet
The next step is to create the application and deploy it to a genuine robot. Back when the days were long, AWS re:Invent was months away, and I seemingly had all the time in the world, I purchased and assembled a TurtleBot3 robot with the intention of showing it in action as the final episode in this story. However, time passed way too quickly and I have not had time to do the final setup. The robot itself was fun to assemble (tweezers, a steady hand, and a good light are recommended):

RoboMaker lets me create my robot and assign it to an AWS Greengrass group:

Then I would create a fleet, add Johnny5 to it, and deploy my code! Behind the scenes, the deployment system makes use of the Greengrass OTA (Over the Air) update mechanism.

Rolling Ahead
I’ve done my best to show you some of the more interesting aspects of AWS RoboMaker, but there’s a lot more to talk about. Here are a few quick notes:

Programmability – RoboMaker includes a rich set of functions that allow you to create, list, and manage simulation jobs, applications, robots, and fleets (there's a short sketch after these notes).

Parallel Simulation – After you design and code an algorithm for your robot, you can create parallel simulation jobs in order to quickly see how your algorithm performs in different conditions or environments. For example, you could use tens or hundreds of real-world models of streets or offices to test a wayfinding or driving algorithm.

Powered by AWS – The code that you write for execution on your robot can use our ROS packages for access to relevant AWS services such as Rekognition, Lex, and Kinesis Video Streams.

ROS – ROS is an open source project. We are contributing code and expertise, including the packages that provide access to AWS. To learn more about ROS, read The Open Source Robot Operating System (ROS) and AWS RoboMaker.

Pricing – There’s no charge to use ROS in your robot apps. Cloud9 makes use of EC2 and EBS and you pay for any usage beyond the AWS Free Tier. Your simulations are billed based on Simulation Units. You also pay for the use of Greengrass and for any AWS services (Lex, Polly, and so forth) that your code uses.
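To give you a feel for the Programmability note above, here's a speculative boto3 sketch; the fleet name, robot architecture, and Greengrass group ID are placeholders:

import boto3

robomaker = boto3.client("robomaker")

# Inventory of existing simulation jobs, robots, and fleets
print(robomaker.list_simulation_jobs()["simulationJobSummaries"])
print(robomaker.list_robots()["robots"])
print(robomaker.list_fleets()["fleetDetails"])

# Register a robot and add it to a new fleet
fleet = robomaker.create_fleet(name="home-fleet")
robot = robomaker.create_robot(
    name="Johnny5",
    architecture="ARMHF",                           # assumed TurtleBot3-class hardware
    greengrassGroupId="example-greengrass-group-id",  # placeholder Greengrass group
)
robomaker.register_robot(fleet=fleet["arn"], robot=robot["arn"])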

Available Now
AWS RoboMaker is available now and you can start building cool robot apps today! We are launching in the US East (N. Virginia), US West (Oregon), and Europe (Ireland) regions, with Asia Pacific (Tokyo) next on the list.

Jeff;

PS – I will find the time to write and share a cool app for the TurtleBot3, so stay tuned!

Categories: Cloud

Welcome to AWS re:Invent 2018

AWS Blog - Sun, 11/25/2018 - 20:46

It is Sunday night and AWS re:Invent 2018 is underway. I hope that you are as excited as I am to be able to learn about our latest and greatest services and features!

A New Approach
After speaking to attendees at last year’s re:Invent, looking at data, and listening to lots of comments from readers, we decided to take a slightly different approach to blogging than we have in years past. While we know that you enjoy reading about everything that’s new, we also know that you strongly prefer to read about launches that are actionable (to use my favorite phrase, “available now and you can start using it today”).

With that in mind, the re:Invent bloggers (Julien, Shaun, Danilo, Abby, and I) decided to devote most of our energy to the actionable launches, doing our best to delight you with our traditional detailed and well-illustrated blog posts. We’ve been working non-stop to write posts that accurately convey the most important aspects of each launch. Feedback from past years also told us that you preferred posts that were tight and to the point, and we have done our best to meet that expectation.

One other thing to know – in order to make sure that we can publish these posts mere seconds after the announcements, we have kept them free of links to product pages, consoles, documentation, and other newly launched services. We will add links as time allows after the dust settles.

Previews and Preannouncements
Instead of spilling all of the beans now and leaving little to write about at launch time, we have grouped many of the previews and preannouncements into a small set of summary posts. The previews generally include sign-up links that will allow you to express your interest in getting access to the service or feature while it is still under development.

Learn More
To learn more about all of the announcements that we are making at re:Invent, be sure to read What’s New at AWS on a regular basis (or subscribe to the RSS feed). I will also pick a few of my favorite launches to share in video form.

Jeff;

PS – I’m never too busy for a handshake or a selfie, so be sure to stop me and say hello if you see me! I’ve also got plenty of stickers.

 

Categories: Cloud

PHP 7.3.0RC6 Released

PHP News - Thu, 11/22/2018 - 03:22
Categories: PHP

New AWS Resource Access Manager – Cross-Account Resource Sharing

AWS Blog - Wed, 11/21/2018 - 10:26

As I have discussed in the past, our customers use multiple AWS accounts for many different reasons. Some of them use accounts to create administrative and billing boundaries; others use them to control the blast radius around any mistakes that they make.

Even though all of this isolation is a net positive for our customers, it turns out that certain types of sharing can be useful and beneficial. For example, many customers want to create resources centrally and share them across accounts in order to reduce management overhead and operational costs.

AWS Resource Access Manager
The new AWS Resource Access Manager (RAM) facilitates resource sharing between AWS accounts. It makes it easy to share resources within your AWS Organization and can be used from the Console, CLI, or through a set of APIs. We are launching with support for Route 53 Resolver Rules (announced yesterday in Shaun’s excellent post) and will be adding more types of resources soon.

To share resources, you simply create a Resource Share, give it a name, add one or more of your resources to it, and grant access to other AWS accounts. Each Resource Share is like a shopping cart, and can hold resources of differing types. You can share any resources that you own, but you cannot re-share resources that have been shared with you. You can share resources with Organizations, Organizational Units (OUs), or AWS accounts. You can also control whether accounts from outside of your Organization can be added to a particular Resource Share.

The master account for your Organization must enable sharing on the Settings page of the RAM Console:

After that, sharing a resource with another account in your Organization makes the resources available with no further action on either side (RAM takes advantage of the handshake that was done when the account was added to the Organization). Sharing a resource with an account outside of your Organization sends an invitation that must be accepted in order to make the resource available to the account.

When resources are shared with an account (let’s call it the consuming account) the shared resources will show up on the appropriate console page along with the resources owned by the consuming account. Similarly, Describe/List calls will return both shared resources and resources owned by the consuming account.

Resource Shares can be tagged and you can reference the tags in IAM policies to create a tag-based permission system. You can add and remove accounts and resources from a Resource Share at any time.

Using AWS Resource Access Manager
I open the RAM Console and click Create a resource share to get started:

I enter a name for my share (CompanyResolvers) and choose the resources that I want to add:

As I mentioned earlier, we’ll be adding more resource types soon!

I enter the principals (Organizations, OUs, or AWS accounts) that I want to share the resources with, and click Create resource share:

The other accounts receive invitations if they are outside of my Organization. The invitations are visible in, and can be accepted from, the console. After accepting the invites, and with proper IAM permissions, they have access to the resources.

RAM also gives me centralized access to everything that I have shared, and everything that has been shared with me:

You can also automate the sharing process using functions like CreateResourceShare, UpdateResourceShare, GetResourceShareInvitations, and AcceptResourceShareInvitation. You can, of course, use IAM policies to regulate the use of these functions on both sides of the transaction.
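Here's a rough boto3 sketch of that flow; the resolver rule ARN and account IDs are placeholders:

import boto3

ram = boto3.client("ram")

# Owner account: share a Route 53 Resolver rule and allow accounts outside the Organization
share = ram.create_resource_share(
    name="CompanyResolvers",
    resourceArns=[
        "arn:aws:route53resolver:us-east-1:111111111111:resolver-rule/rslvr-rr-example"  # placeholder
    ],
    principals=["222222222222"],        # placeholder consuming account
    allowExternalPrincipals=True,
)
print(share["resourceShare"]["resourceShareArn"])

# Consuming account: accept any pending invitations
for invitation in ram.get_resource_share_invitations()["resourceShareInvitations"]:
    if invitation["status"] == "PENDING":
        ram.accept_resource_share_invitation(
            resourceShareInvitationArn=invitation["resourceShareInvitationArn"]
        )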

There are no charges for resource sharing.

Available Now
AWS Resource Access Manager (RAM) is available now and you can start sharing resources today.

Jeff;

Categories: Cloud

Coming Soon – Snowball Edge with More Compute Power and a GPU

AWS Blog - Tue, 11/20/2018 - 17:39

I never get tired of seeing customer-driven innovation in action! When AWS customers told us that they needed an easy way to move petabytes of data in and out of AWS, we responded with the AWS Snowball. Later, when they told us that they wanted to do some local data processing and filtering (often at disconnected sites) before sending the devices and the data back to AWS, we launched the AWS Snowball Edge, which allowed them to use AWS Lambda functions for local processing. Earlier this year we added support for EC2 Compute Instances, with six instance sizes and the ability to preload up to 10 AMIs onto each device.

Great progress, but we are not done yet!

More Compute Power and a GPU
I’m happy to tell you that we are getting ready to give you two new Snowball Edge options: Snowball Edge Compute Optimized and Snowball Edge Compute Optimized with GPU (the original Snowball Edge is now called Snowball Edge Storage Optimized). Both options include 42 TB of S3-compatible storage and 7.68 TB of NVMe SSD storage, and allow you to run any combination of instances that consume up to 52 vCPUs and 208 GiB of memory. The additional processing power gives you the ability to do even more types of processing at the edge.

Here are the specs for the instances:

Instance Name                     vCPUs    Memory
sbe-c.small / sbe-g.small             1     2 GiB
sbe-c.medium / sbe-g.medium           1     4 GiB
sbe-c.large / sbe-g.large             2     8 GiB
sbe-c.xlarge / sbe-g.xlarge           4    16 GiB
sbe-c.2xlarge / sbe-g.2xlarge         8    32 GiB
sbe-c.4xlarge / sbe-g.4xlarge        16    64 GiB
sbe-c.8xlarge / sbe-g.8xlarge        32   128 GiB
sbe-c.12xlarge / sbe-g.12xlarge      48   192 GiB

The Snowball Edge Compute Optimized with GPU includes an on-board GPU that you can use to do real-time full-motion video analysis & processing, machine learning inferencing, and other highly parallel compute-intensive work. You can launch an sbe-g instance to gain access to the GPU.

You will be able to select the option you need using the console, as always:

The Compute Optimized device is just a tad bigger than the Storage Optimized Device. Here they are, sitting side-by-side on an Amazon door desk:

Stay Tuned
I’ll have more information to share soon, so stay tuned!

Jeff;

Categories: Cloud

New – Predictive Scaling for EC2, Powered by Machine Learning

AWS Blog - Tue, 11/20/2018 - 16:36

When I look back on the history of AWS and think about the launches that truly signify the fundamentally dynamic, on-demand nature of the cloud, two stand out in my memory: the launch of Amazon EC2 in 2006 and the concurrent launch of CloudWatch Metrics, Auto Scaling, and Elastic Load Balancing in 2009. The first launch provided access to compute power; the second made it possible to use that access to rapidly respond to changes in demand. We have added a multitude of features to all of these services since then, but as far as I am concerned they are still central and fundamental!

New Predictive Scaling
Today we are making Auto Scaling even more powerful with the addition of predictive scaling. Using data collected from your actual EC2 usage and further informed by billions of data points drawn from our own observations, we use well-trained Machine Learning models to predict your expected traffic (and EC2 usage) including daily and weekly patterns. The model needs at least one day of historical data to start making predictions; it is re-evaluated every 24 hours to create a forecast for the next 48 hours.

We’ve done our best to make this really easy to use. You enable it with a single click, and then use a 3-step wizard to choose the resources that you want to observe and scale. You can configure some warm-up time for your EC2 instances, and you also get to see actual and predicted usage in a cool visualization! The prediction process produces a scaling plan that can drive one or more groups of Auto Scaled EC2 instances.

Once your new scaling plan is in action, you will be able to scale proactively, ahead of daily and weekly peaks. This will improve the overall user experience for your site or business, and it can also help you to avoid over-provisioning, which will reduce your EC2 costs.

Let’s take a look…

Predictive Scaling in Action
The first step is to open the Auto Scaling Console and click Get started:

I can select the resources to be observed and predictively scaled in three different ways:

I select an EC2 Auto Scaling group (not shown), then I assign my group a name, pick a scaling strategy, and leave both Enable predictive scaling and Enable dynamic scaling checked:

As you can see from the screen above, I can use predictive scaling, dynamic scaling, or both. Predictive scaling works by forecasting load and scheduling minimum capacity; dynamic scaling uses target tracking to adjust a designated CloudWatch metric to a specific target. The two models work well together because of the scheduled minimum capacity already set by predictive scaling.

I can also fine-tune the predictive scaling, but the default values will work well to get started:

I can forecast on one of three pre-chosen metrics (this is in the General settings):

Or on a custom metric:

I have the option to do predictive forecasting without actually scaling:

And I can set up a buffer time so that newly launched instances can warm up and be ready to handle traffic at the predicted time:

After a couple more clicks, the scaling plan is created and the learning/prediction process begins! I return to the console and I can see the forecasts for CPU Utilization (my chosen metric) and for the number of instances:

I can see the scaling actions that will implement the predictions:

I can also see the CloudWatch metrics for the Auto Scaling group:

And that’s all you need to do!
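If you prefer to automate this, the same kind of scaling plan can, as far as I can tell, be created through the AWS Auto Scaling Plans API; here's a hedged boto3 sketch with a placeholder Auto Scaling group and tag filter, and field names as I understand that API:

import boto3

plans = boto3.client("autoscaling-plans")

# One scaling instruction for an existing Auto Scaling group (name is a placeholder)
plans.create_scaling_plan(
    ScalingPlanName="web-fleet-plan",
    ApplicationSource={"TagFilters": [{"Key": "app", "Values": ["web"]}]},
    ScalingInstructions=[
        {
            "ServiceNamespace": "autoscaling",
            "ResourceId": "autoScalingGroup/my-web-asg",
            "ScalableDimension": "autoscaling:autoScalingGroup:DesiredCapacity",
            "MinCapacity": 2,
            "MaxCapacity": 20,
            # Dynamic scaling: target-track average CPU at 50%
            "TargetTrackingConfigurations": [
                {
                    "PredefinedScalingMetricSpecification": {
                        "PredefinedScalingMetricType": "ASGAverageCPUUtilization"
                    },
                    "TargetValue": 50.0,
                }
            ],
            # Predictive scaling: forecast on total CPU and schedule minimum capacity
            "PredefinedLoadMetricSpecification": {
                "PredefinedLoadMetricType": "ASGTotalCPUUtilization"
            },
            "PredictiveScalingMode": "ForecastAndScale",
            "ScheduledActionBufferTime": 300,   # warm-up buffer, in seconds
        }
    ],
)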

Here are a couple of things to keep in mind about predictive scaling:

Timing – Once the initial set of predictions has been made and the scaling plans are in place, the plans are updated daily and forecasts are made for the following 2 days.

Cost – You can use predictive scaling at no charge, and may even reduce your AWS charges.

Resources – We are launching with support for EC2 instances, and plan to support other AWS resource types over time.

Applicability – Predictive scaling is a great match for web sites and applications that undergo periodic traffic spikes. It is not designed to help in situations where spikes in load are not cyclic or predictable.

Long-Term Baseline – Predictive scaling maintains the minimum capacity based on historical demand; this ensures that any gaps in the metrics won’t cause an inadvertent scale-in.

Available Now
Predictive scaling is available now and you can start using it today in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Singapore) Regions.

Jeff;

 

Categories: Cloud

Dutch PHP Conference 2019

PHP News - Tue, 11/20/2018 - 06:02
Categories: PHP
