
Feed aggregator

Amazon Kinesis Video Streams Adds Support For HLS Output Streams

AWS Blog - Fri, 07/13/2018 - 16:10

Today I’m excited to announce and demonstrate the new HTTP Live Streaming (HLS) output feature for Amazon Kinesis Video Streams (KVS). If you’re not already familiar with KVS, Jeff covered the release for AWS re:Invent in 2017. In short, Amazon Kinesis Video Streams is a service for securely capturing, processing, and storing video for analytics and machine learning – from one device or millions. Customers are using Kinesis Video with machine learning algorithms to power everything from home automation and smart cities to industrial automation and security.

After iterating on customer feedback, we’ve launched a number of features in the past few months, including a plugin for GStreamer, the popular open source multimedia framework, and Docker containers that make it easy to start streaming video to Kinesis. We could talk about each of those features at length, but today is all about the new HLS output feature! Fair warning, there are a few pictures of my incredibly messy office in this post.

HLS output is a new feature that allows customers to create HLS endpoints for their Kinesis Video Streams, which is convenient for building custom UIs and tools that can play back live and on-demand video. The HLS-based playback capability is fully managed, so you don’t have to build any infrastructure to transmux the incoming media. You simply create a new streaming session (up to 5, for now) with the new GetHLSStreamingSessionURL API and you’re off to the races. The great thing about HLS is that it’s already an industry standard and really easy to leverage in existing web players like JW Player, hls.js, VideoJS, Google’s Shaka Player, or even rendered natively in mobile apps with Android’s ExoPlayer and iOS’s AVFoundation. Let’s take a quick look at the API; feel free to skip to the walk-through below as well.

Kinesis Video HLS Output API

The documentation covers this in more detail than we can go over in this blog post, but I’ll cover the broad components.

  1. Get an endpoint with the GetDataEndpoint API
  2. Use that endpoint to get an HLS streaming URL with the GetHLSStreamingSessionURL API
  3. Render the content in the HLS URL with whatever tools you want!

This is pretty easy in a Jupyter notebook with a quick bit of Python and boto3.

import boto3

STREAM_NAME = "RandallDeepLens"

kvs = boto3.client("kinesisvideo")

# Grab the endpoint from GetDataEndpoint
endpoint = kvs.get_data_endpoint(
    APIName="GET_HLS_STREAMING_SESSION_URL",
    StreamName=STREAM_NAME
)['DataEndpoint']

# Grab the HLS Stream URL from the endpoint
kvam = boto3.client("kinesis-video-archived-media", endpoint_url=endpoint)
url = kvam.get_hls_streaming_session_url(
    StreamName=STREAM_NAME,
    PlaybackMode="LIVE"
)['HLSStreamingSessionURL']

You can even visualize everything right away in Safari which can render HLS streams natively.

from IPython.display import HTML

HTML(data='<video src="{0}" autoplay="autoplay" controls="controls" width="300" height="400"></video>'.format(url))

We can also stream directly from an AWS DeepLens with just a bit of code:

import DeepLens_Kinesis_Video as dkv
import time

aws_access_key = "super_fake"
aws_secret_key = "even_more_fake"
region = "us-east-1"
stream_name = "RandallDeepLens"
retention = 1  # in minutes
wait_time_sec = 60 * 300  # the number of seconds to stream the data

# will create the stream if it does not already exist
producer = dkv.createProducer(aws_access_key, aws_secret_key, "", region)
my_stream = producer.createStream(stream_name, retention)
my_stream.start()
time.sleep(wait_time_sec)
my_stream.stop()

How to use Kinesis Video Streams HLS Output Streams

We definitely need a Kinesis Video Stream, which we can create easily in the Kinesis Video Streams Console.

Now, we need to get some content into the stream. We have a few options here. Perhaps the easiest is the Docker container. I decided to take the more adventurous route and compile the GStreamer plugin locally on my Mac, following the scripts on GitHub. Be warned, compiling this plugin takes a while and can cause your computer to transform into a space heater.

With our freshly compiled GStreamer binaries like gst-launch-1.0 and the kvssink plugin, we can stream directly from my MacBook’s webcam, or any other GStreamer source, into Kinesis Video Streams. I just use the kvssink output plugin and my data winds up in the video stream. There are a few parameters to configure here, so pay attention.

Here’s an example command that I ran to stream my MacBook’s webcam to Kinesis Video Streams:

gst-launch-1.0 autovideosrc ! videoconvert \
  ! video/x-raw,format=I420,width=640,height=480,framerate=30/1 \
  ! vtenc_h264_hw allow-frame-reordering=FALSE realtime=TRUE max-keyframe-interval=45 bitrate=500 \
  ! h264parse \
  ! video/x-h264,stream-format=avc,alignment=au,width=640,height=480,framerate=30/1 \
  ! kvssink stream-name="BlogStream" storage-size=1024 aws-region=us-west-2 log-config=kvslog

Now that we’re streaming some data into Kinesis, I can use the getting started sample static website to test my HLS stream with a few different video players. I just fill in my AWS credentials and ask it to start playing. The GetHLSStreamingSessionURL API supports a number of parameters so you can play both on-demand segments and live streams from various timestamps.
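For example, here is a rough sketch (not part of the original walk-through) of requesting an on-demand session for a recent window of archived video; the 15-minute window and the Expires value are just illustrative:

import boto3
from datetime import datetime, timedelta

STREAM_NAME = "RandallDeepLens"  # stream name reused from the earlier example

kvs = boto3.client("kinesisvideo")
endpoint = kvs.get_data_endpoint(
    APIName="GET_HLS_STREAMING_SESSION_URL",
    StreamName=STREAM_NAME
)['DataEndpoint']

kvam = boto3.client("kinesis-video-archived-media", endpoint_url=endpoint)

# Request an on-demand session covering the last 15 minutes of archived video
end = datetime.utcnow()
start = end - timedelta(minutes=15)
url = kvam.get_hls_streaming_session_url(
    StreamName=STREAM_NAME,
    PlaybackMode="ON_DEMAND",
    HLSFragmentSelector={
        "FragmentSelectorType": "PRODUCER_TIMESTAMP",
        "TimestampRange": {"StartTimestamp": start, "EndTimestamp": end},
    },
    Expires=3600,  # keep the session URL valid for an hour
)['HLSStreamingSessionURL']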

Additional Info

Data consumed from Kinesis Video Streams using HLS is charged at $0.0119 per GB in US East (N. Virginia) and US West (Oregon); pricing for other regions is available on the service pricing page. This feature is available now in all regions where Kinesis Video Streams is available.

The Kinesis Video team told me they’re working hard on getting more integration with the AWS Media services, like MediaLive, which will make it easier to serve Kinesis Video Stream content to larger audiences.

As always, let us know what you think on Twitter or in the comments. I’ve had a ton of fun playing around with this feature over the past few days and I’m excited to see customers build some new tools with it!

Randall

Categories: Cloud

New – Lifecycle Management for Amazon EBS Snapshots

AWS Blog - Thu, 07/12/2018 - 21:56

It is always interesting to zoom in on the history of a single AWS service or feature and watch how it has evolved over time in response to customer feedback. For example, Amazon Elastic Block Store (EBS) launched a decade ago and has been gaining more features and functionality ever since. Here are a few of the most significant announcements:

Several of the items that I chose to highlight above make EBS snapshots more useful and more flexible. As you may already know, it is easy to create snapshots. Each snapshot is a point-in-time copy of the blocks that have changed since the previous snapshot, with automatic management to ensure that only the data unique to a snapshot is removed when it is deleted. This incremental model reduces your costs and minimizes the time needed to create a snapshot.

Because snapshots are so easy to create and use, our customers create a lot of them, and make great use of tags to categorize, organize, and manage them. Going back to my list, you can see that we have added multiple tagging features over the years.

Lifecycle Management – The Amazon Data Lifecycle Manager
We want to make it even easier for you to create, use, and benefit from EBS snapshots! Today we are launching Amazon Data Lifecycle Manager to automate the creation, retention, and deletion of Amazon EBS volume snapshots. Instead of creating snapshots manually and deleting them in the same way (or building a tool to do it for you), you simply create a policy, indicating (via tags) which volumes are to be snapshotted, set a retention model, fill in a few other details, and let Data Lifecycle Manager do the rest. Data Lifecycle Manager is powered by tags, so you should start by setting up a clear and comprehensive tagging model for your organization (refer to the links above to learn more).

It turns out that many of our customers have invested in tools to automate the creation of snapshots, but have skimped on retention and deletion. Sooner or later they receive a surprisingly large AWS bill and find that their scripts are not working as expected. Data Lifecycle Manager should help them save money and rest assured that their snapshots are being managed as expected.

Creating and Using a Lifecycle Policy
Data Lifecycle Manager uses lifecycle policies to figure out when to run, which volumes to snapshot, and how long to keep the snapshots around. You can create the policies in the AWS Management Console, from the AWS Command Line Interface (CLI) or via the Data Lifecycle Manager APIs; I’ll use the Console today. Here are my EBS volumes, all suitably tagged with a department:

I access the Lifecycle Manager from the Elastic Block Store section of the menu:

Then I click Create Snapshot Lifecycle Policy to proceed:

Then I create my first policy:

I use tags to specify the volumes that the policy applies to. If I specify multiple tags, then the policy applies to volumes that have any of the tags:

I can create snapshots at 12 or 24 hour intervals, and I can specify the desired snapshot time. Snapshot creation will start no more than an hour after this time, with completion based on the size of the volume and the degree of change since the last snapshot.

I can use the built-in default IAM role or I can create one of my own. If I use my own role, I need to enable the EC2 snapshot operations and all of the DLM (Data Lifecycle Manager) operations; read the docs to learn more.

Newly created snapshots will be tagged with the aws:dlm:lifecycle-policy-id and  aws:dlm:lifecycle-schedule-name automatically; I can also specify up to 50 additional key/value pairs for each policy:

I can see all of my policies at a glance:

I took a short break and came back to find that the first set of snapshots had been created, as expected (I configured the console to show the two tags created on the snapshots):

Things to Know
Here are a couple of things to keep in mind when you start to use Data Lifecycle Manager to automate your snapshot management:

Data Consistency – Snapshots contain the data from all completed I/O operations, making them crash-consistent.

Pricing – You can create and use Data Lifecycle Manager policies at no charge; you pay the usual storage charges for the EBS snapshots that it creates.

Availability – Data Lifecycle Manager is available in the US East (N. Virginia), US West (Oregon), and EU (Ireland) Regions.

Tags and Policies – If a volume has more than one tag and the tags match multiple policies, each policy will create a separate snapshot and both policies will govern the retention. No two policies can specify the same key/value pair for a tag.

Programmatic Access – You can create and manage policies programmatically! Take a look at the CreateLifecyclePolicy, GetLifecyclePolicies, and UpdateLifecyclePolicy functions to get started (there is a quick boto3 sketch after this list). You can also write an AWS Lambda function that runs in response to the createSnapshot event.

Error Handling – Data Lifecycle Manager generates a “DLM Policy State Change” event if a policy enters the error state.

In the Works – As you might have guessed from the name, we plan to add support for additional AWS data sources over time. We also plan to support policies that will let you do weekly and monthly snapshots, and also expect to give you additional scheduling flexibility.
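To round out the programmatic access note above, here is a minimal boto3 sketch of creating a policy; the tag values, role ARN, and schedule details are placeholders rather than anything taken from this walk-through:

import boto3

dlm = boto3.client("dlm")

# Snapshot every volume tagged department=engineering each day at 03:00 UTC,
# keeping the five most recent snapshots (all names and the ARN are placeholders)
response = dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily snapshots for engineering volumes",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "department", "Value": "engineering"}],
        "Schedules": [{
            "Name": "DailySnapshots",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 5},
            "TagsToAdd": [{"Key": "costcenter", "Value": "12345"}],
        }],
    },
)
print(response["PolicyId"])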

Jeff;

Categories: Cloud

AWS Storage Gateway Recap – SMB Support, RefreshCache Event, and More

AWS Blog - Thu, 07/12/2018 - 07:24

To borrow my own words, the AWS Storage Gateway is a service that includes a multi-protocol storage appliance that fits in between your existing application and the AWS Cloud. Your applications see the gateway as a file system, a local disk volume, or a Virtual Tape Library, depending on how it was configured.

Today I would like to share a few recent updates to the File Gateway configuration of the Storage Gateway, and also show you how they come together to enable some new processing models. First, the most recent updates:

SMB Support – The File Gateway already supports access from clients that speak NFS (versions 3 and 4.1 are supported). Last month we added support for the Server Message Block (SMB) protocol. This allows Windows applications that communicate using v2 or v3 of SMB to store files as objects in S3 through the gateway, enabling hybrid cloud use cases such as backup, content distribution, and processing of machine learning and big data workloads. You can control access to the gateway using your existing on-premises Active Directory (AD) domain or a cloud-based domain hosted in AWS Directory Service, or you can use authenticated guest access. To learn more about this update, read AWS Storage Gateway Adds SMB Support to Store and Access Objects in Amazon S3 Buckets.

Cross-Account Permissions – Some of our customers run their gateways in one AWS account and configure them to upload to an S3 bucket owned by another account. This allows them to track departmental storage and retrieval costs using chargeback and showback models. In order to simplify this important use case, you can configure the gateway to provide the bucket owner with full permissions. This avoids a pain point which could arise if the bucket owner was unable to see the objects. To learn how to set this up, read Using a File Share for Cross-Account Access.

Requester Pays – Bucket owners are responsible for storage costs. Owners pay for data transfer costs by default, but also have the option to have the requester pay. To support this use case, the File Gateway now supports S3’s Requester Pays Buckets. Data collectors and aggregators can use this feature to share data with research organizations such as universities and labs without incurring the costs of access themselves. File Gateway provides file-based access to the S3 objects and caches recently accessed data locally, helping requesters reduce latency and costs. To learn more, read about Creating an NFS File Share and Creating an SMB File Share.

File Upload Notification – The gateway caches files locally, and uploads them to a designated S3 bucket in the background. Late last year we gave you the ability to request notification (in the form of a CloudWatch Event) when new files have been uploaded. You can use this to initiate cloud-based processing or to implement advanced logging. To learn more, read Getting File Upload Notification and study the NotifyWhenUploaded function.

Cache Refresh Event – You have long had the ability to use the RefreshCache function to make sure that the gateway is aware of objects that have been added, removed, or replaced in the bucket. The new Storage Gateway Cache Refresh Event lets you know that the cache is now in sync with S3, and can be used as a signal to initiate local processing. To learn more, read Getting Refresh Cache Notification.
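To show how these two notifications are requested from code, here is a small boto3 sketch; the file share ARN is a placeholder:

import boto3

sgw = boto3.client("storagegateway")
FILE_SHARE_ARN = "arn:aws:storagegateway:us-east-1:123456789012:share/share-EXAMPLE"  # placeholder

# Ask for a CloudWatch Event once everything written to the share so far has reached S3
upload_notification = sgw.notify_when_uploaded(FileShareARN=FILE_SHARE_ARN)

# Refresh the gateway's cached view of the bucket; a cache refresh event fires once it is in sync
refresh_notification = sgw.refresh_cache(FileShareARN=FILE_SHARE_ARN)

print(upload_notification["NotificationId"], refresh_notification["NotificationId"])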

Hybrid Processing Using File Gateway
You can use the File Upload Notification and Cache Refresh to automate some of your routine hybrid process tasks!

Let’s say that you run a geographically distributed office or retail business, with locations all over the world. Raw data (metrics, cash register receipts, or time sheets) is collected at each location, and then uploaded to S3 using a File Gateway hosted at each location. As the data arrives, you use the File Upload Notifications to process each S3 object, perhaps using an AWS Lambda function that invokes Amazon Athena to run a stock set of queries against each one. The data arrives over the course of a couple of hours, and results accumulate in another bucket. At the end of the reporting period, the intermediate results are processed, custom reports are generated for each branch location, and then stored in another bucket (this bucket, as it turns out, is also associated with a gateway, and each gateway will have cached copies of the prior versions of the reports). After you generate your reports, you can refresh each of the gateway caches, wait for the corresponding notifications, and then send an email to the branch managers to tell them that their new report is available.
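As a sketch of the branch-office pipeline described above, a Lambda function wired to the File Upload Notification event might kick off an Athena query along these lines; the event handling and the database, table, and bucket names are illustrative assumptions, not details from the post:

import boto3

athena = boto3.client("athena")

def handler(event, context):
    # Sketch of a handler invoked by the File Upload Notification CloudWatch Event;
    # the payload layout is an assumption, so we just log it for inspection.
    print("Upload notification received:", event.get("detail", {}))

    # Run a stock query against the newly uploaded data (hypothetical database, table, and bucket)
    athena.start_query_execution(
        QueryString="SELECT location, SUM(amount) AS total FROM receipts GROUP BY location",
        QueryExecutionContext={"Database": "branch_data"},
        ResultConfiguration={"OutputLocation": "s3://example-intermediate-results/"},
    )
    return {"status": "query started"}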

Here’s a video (and presentation) with more information about this processing model:

Now Available
All of the features listed above are available now and you can start using them today in all regions where Storage Gateway is available.

Jeff;

Categories: Cloud

AWS re:Invent 2018 is Coming – Are You Ready?

AWS Blog - Tue, 07/10/2018 - 14:45

As I write this, there are just 138 days until re:Invent 2018. My colleagues on the events team are going all-out to make sure that you, our customer, will have the best possible experience in Las Vegas. After meeting with them, I decided to write this post so that you can have a better understanding of what we have in store, know what to expect, and have time to plan and to prepare.

Dealing with Scale
We started out by talking about some of the challenges that come with scale. Approximately 43,000 people (AWS customers, partners, members of the press, industry analysts, and AWS employees) attended in 2017 and we are expecting an even larger crowd this year. We are applying many of the scaling principles and best practices that apply to cloud architectures to the physical, logistical, and communication challenges that are part-and-parcel of an event that is this large and complex.

We want to make it easier for you to move from place to place, while also reducing the need for you to do so! Here’s what we are doing:

Campus Shuttle – In 2017, hundreds of buses traveled on routes that took them to a series of re:Invent venues. This added a lot of latency to the system and we were not happy about that. In 2018, we are expanding the fleet and replacing the multi-stop routes with a larger set of point-to-point connections, along with additional pick-up and drop-off points at each venue. You will be one hop away from wherever you need to go.

Ride Sharing – We are partnering with Lyft and Uber (both powered by AWS) to give you another transportation option (download the apps now to be prepared). We are partnering with the Las Vegas Monorail and the taxi companies, and are also working on a teleportation service, but do not expect it to be ready in time.

Session Access – We are setting up a robust overflow system that spans multiple re:Invent venues, and are also making sure that the most popular sessions are repeated in more than one venue.

Improved Mobile App – The re:Invent mobile app will be more lively and location-aware. It will help you to find sessions with open seats, tell you what is happening around you, and keep you informed of shuttle and other transportation options.

Something for Everyone
We want to make sure that re:Invent is a warm and welcoming place for every attendee, with business and social events that we hope are progressive and inclusive. Here’s just some of what we have in store:

You can also take advantage of our mother’s rooms, gender-neutral restrooms, and reflection rooms. Check out the community page to learn more!

Getting Ready
Now it is your turn! Here are some suggestions to help you to prepare for re:Invent:

  • Register – Registration is now open! Every year I get email from people I have not talked to in years, begging me for last-minute access after re:Invent sells out. While it is always good to hear from them, I cannot always help, even if we were in first grade together.
  • Watch – We’re producing a series of How to re:Invent webinars to help you get the most from re:Invent. Watch What’s New and Breakout Content Secret Sauce ASAP, and stay tuned for more.
  • Plan – The session catalog is now live! View the session catalog to see the initial list of technical sessions. Decide on the topics of interest to you and to your colleagues, and choose your breakout sessions, taking care to pay attention to the locations. There will be over 2,000 sessions so choose with care and make this a team effort.
  • Pay Attention – We are putting a lot of effort into preparatory content – this blog post, the webinars, and more. Watch, listen, and learn!
  • Train – Get to work on your cardio! You can easily walk 10 or more miles per day, so bring good shoes and arrive in peak condition.

Partners and Sponsors
Participating sponsors are a core part of the learning, networking, and after hours activities at re:Invent.

For APN Partners, re:Invent is the single largest opportunity to interact with AWS customers, delivering both business development and product differentiation. If you are interested in becoming a re:Invent sponsor, read the re:Invent Sponsorship Prospectus.

For re:Invent attendees, I urge you to take time to meet with Sponsoring APN Partners in both the Venetian and Aria Expo halls. Sponsors offer diverse skills, Competencies, services and expertise to help attendees solve a variety of different business challenges. Check out the list of re:Invent Sponsors to learn more.

See You There
Once you are on site, be sure to take advantage of all that re:Invent has to offer.

If you are not sure where to go or what to do next, we’ll have some specially trained content experts to guide you.

I am counting down the days, gearing up to crank out a ton of blog posts for re:Invent, and looking forward to saying hello to friends new and old.

Jeff;

PS – We will be adding new sessions to the session catalog over the summer, so be sure to check back every week!

 

Categories: Cloud

DeepLens Challenge #1 Starts Today – Use Machine Learning to Drive Inclusion

AWS Blog - Tue, 07/10/2018 - 10:12

Are you ready to develop and show off your machine learning skills in a way that has a positive impact on the world? If so, get your hands on an AWS DeepLens video camera and join the AWS DeepLens Challenge!

About the Challenge
Working together with our friends at Intel, we are launching the first in a series of eight themed challenges today, all centered around improving the world in some way. Each challenge will run for two weeks and is designed to help you to get some hands-on experience with machine learning.

We will announce a fresh challenge every two weeks on the AWS Machine Learning Blog. Each challenge will have a real-world theme, a technical focus, a sample project, and a subject matter expert. You have 12 days to invent and implement a DeepLens project that resonates with the theme, and to submit a short, compelling video (four minutes or less) to represent and summarize your work.

We’re looking for cool submissions that resonate with the theme and that make great use of DeepLens. We will watch all of the videos and then share the most intriguing ones.

Challenge #1 – Inclusivity Challenge
The first challenge was inspired by the Special Olympics, which took place in Seattle last week. We invite you to use your DeepLens to create a project that drives inclusion, overcomes barriers, and strengthens the bonds between people of all abilities. You could gauge the physical accessibility of buildings, provide audio guidance using Polly for people with impaired sight, or create educational projects for children with learning disabilities. Any project that supports this theme is welcome.

For each project that meets the entry criteria we will make a donation of $249 (the retail price of an AWS DeepLens) to the Northwest Center, a non-profit organization based in Seattle. This organization works to advance equal opportunities for children and adults of all abilities and we are happy to be able to help them to further their mission. Your work will directly benefit this very worthwhile goal!

As an example of what we are looking for, ASLens is a project created by Chris Coombs of Melbourne, Australia. It recognizes and understands American Sign Language (ASL) and plays the audio for each letter. Chris used Amazon SageMaker and Polly to implement ASLens (you can watch the video, learn more and read the code).

To learn more, visit the DeepLens Challenge page. Entries for the first challenge are due by midnight (PT) on July 22nd and I can’t wait to see what you come up with!

Jeff;

PS – The DeepLens Resources page is your gateway to tutorial videos, documentation, blog posts, and other helpful information.

Categories: Cloud

AWS Heroes – New Categories Launch

AWS Blog - Thu, 07/05/2018 - 11:55

As you may know, in 2014 we launched the AWS Community Heroes program to recognize a vibrant group of AWS experts. These standout individuals use their extensive knowledge to teach customers and fellow-techies about AWS products and services across a range of mediums. As AWS grows, new groups of Heroes emerge.

Today, we’re excited to recognize prominent community leaders by expanding the AWS Heroes program. Unlike Community Heroes (who tend to focus on advocating a wide-range of AWS services within their community), these new Heroes are specialists who focus their efforts and advocacy on a specific technology. Our first new heroes are the AWS Serverless Heroes and AWS Container Heroes. Please join us in welcoming them as the passion and enthusiasm for AWS knowledge-sharing continues to grow in technical communities.

AWS Serverless Heroes

Serverless Heroes are early adopters and spirited pioneers of the AWS serverless ecosystem. They evangelize AWS serverless technologies online and in person. Through open source contributions to GitHub and the AWS Serverless Application Repository, these Serverless Heroes help evolve the way developers, companies, and the community at large build modern applications. Our initial cohort of Serverless Heroes includes:

Yan Cui

Aleksandar Simovic

Forrest Brazeal

Marcia Villalba

Erica Windisch

Peter Sbarski

Slobodan Stojanović

Rob Gruhl

Michael Hart

Ben Kehoe

Austen Collins

Announcing AWS Container Heroes

Container Heroes are prominent trendsetters who are deeply connected to the ever-evolving container community. They possess extensive knowledge of multiple Amazon container services, are always keen to learn the latest trends, and are passionate about sharing their insights with anyone running containers on AWS. Please meet the first AWS Container Heroes:

 

Casey Lee

Tung Nguyen

Philipp Garbe

Yusuke Kuoka

Mike Fielder

The trends within the AWS community are ever-changing.  We look forward to recognizing a wide variety of Heroes in the future. Stay tuned for additional updates to the Hero program in coming months, and be sure to visit the Heroes website to learn more.

Categories: Cloud

PHP 7.3.0 alpha 3 Released

PHP News - Thu, 07/05/2018 - 02:41
Categories: PHP

AWS Online Tech Talks – July 2018

AWS Blog - Mon, 07/02/2018 - 15:05

Join us this month to learn about AWS services and solutions featuring topics on Amazon EMR, Amazon SageMaker, AWS Lambda, Amazon S3, Amazon WorkSpaces, Amazon EC2 Fleet and more! We also have our third episode of the “How to re:Invent” where we’ll dive deep with the AWS Training and Certification team on Bootcamps, Hands-on Labs, and how to get AWS Certified at re:Invent. Register now! We look forward to seeing you. Please note – all sessions are free and in Pacific Time.

 

Tech talks featured this month:

 

Analytics & Big Data

July 23, 2018 | 11:00 AM – 12:00 PM PT – Large Scale Machine Learning with Spark on EMR – Learn how to do large scale machine learning on Amazon EMR.

July 25, 2018 | 01:00 PM – 02:00 PM PT – Introduction to Amazon QuickSight: Business Analytics for Everyone – Get an introduction to Amazon Quicksight, Amazon’s BI service.

July 26, 2018 | 11:00 AM – 12:00 PM PT – Multi-Tenant Analytics on Amazon EMR – Discover how to make an Amazon EMR cluster multi-tenant to have different processing activities on the same data lake.

 

Compute

July 31, 2018 | 11:00 AM – 12:00 PM PT – Accelerate Machine Learning Workloads Using Amazon EC2 P3 Instances – Learn how to use Amazon EC2 P3 instances, the most powerful, cost-effective and versatile GPU compute instances available in the cloud.

August 1, 2018 | 09:00 AM – 10:00 AM PT – Technical Deep Dive on Amazon EC2 Fleet – Learn how to launch workloads across instance types, purchase models, and AZs with EC2 Fleet to achieve the desired scale, performance and cost.

 

Containers

July 25, 2018 | 11:00 AM – 11:45 AM PT – How Harry’s Shaved Off Their Operational Overhead by Moving to AWS Fargate – Learn how Harry’s migrated their messaging workload to Fargate and reduced message processing time by more than 75%.

 

Databases

July 23, 2018 | 01:00 PM – 01:45 PM PT – Purpose-Built Databases: Choose the Right Tool for Each Job – Learn about purpose-built databases and when to use which database for your application.

July 24, 2018 | 11:00 AM – 11:45 AM PT – Migrating IBM Db2 Databases to AWS – Learn how to migrate your IBM Db2 database to the cloud database of your choice.

 

DevOps

July 25, 2018 | 09:00 AM – 09:45 AM PT – Optimize Your Jenkins Build Farm – Learn how to optimize your Jenkins build farm using the plug-in for AWS CodeBuild.

 

Enterprise & Hybrid

July 31, 2018 | 09:00 AM – 09:45 AM PT – Enable Developer Productivity with Amazon WorkSpaces – Learn how your development teams can be more productive with Amazon WorkSpaces.

August 1, 2018 | 11:00 AM – 11:45 AM PT – Enterprise DevOps: Applying ITIL to Rapid Innovation – Innovation doesn’t have to equate to more risk for your organization. Learn how Enterprise DevOps delivers agility while maintaining governance, security and compliance.

 

IoT

July 30, 2018 | 01:00 PM – 01:45 PM PT – Using AWS IoT & Alexa Skills Kit to Voice-Control Connected Home Devices – Hands-on workshop that covers how to build a simple backend service using AWS IoT to support an Alexa Smart Home skill.

 

Machine Learning

July 23, 2018 | 09:00 AM – 09:45 AM PT – Leveraging ML Services to Enhance Content Discovery and Recommendations – See how customers are using computer vision and language AI services to enhance content discovery & recommendations.

July 24, 2018 | 09:00 AM – 09:45 AM PT – Hyperparameter Tuning with Amazon SageMaker’s Automatic Model Tuning – Learn how to use Automatic Model Tuning with Amazon SageMaker to get the best machine learning model for your datasets, to tune hyperparameters.

July 26, 2018 | 09:00 AM – 10:00 AM PT – Build Intelligent Applications with Machine Learning on AWS – Learn how to accelerate development of AI applications using machine learning on AWS.

 

re:Invent

July 18, 2018 | 08:00 AM – 08:30 AM PT – Episode 3: Training & Certification Round-Up – Join us as we dive deep with the AWS Training and Certification team on Bootcamps, Hands-on Labs, and how to get AWS Certified at re:Invent.

 

Security, Identity, & Compliance

July 30, 2018 | 11:00 AM – 11:45 AM PT – Get Started with Well-Architected Security Best Practices – Discover and walk through essential best practices for securing your workloads using a number of AWS services.

 

Serverless

July 24, 2018 | 01:00 PM – 02:00 PM PT – Getting Started with Serverless Computing Using AWS Lambda – Get an introduction to serverless and how to start building applications with no server management.

 

Storage

July 30, 2018 | 09:00 AM – 09:45 AM PT – Best Practices for Security in Amazon S3 – Learn about Amazon S3 security fundamentals and lots of new features that help make security simple.

Categories: Cloud

AWS Lambda Adds Amazon Simple Queue Service to Supported Event Sources

AWS Blog - Thu, 06/28/2018 - 10:43

We can now use Amazon Simple Queue Service (SQS) to trigger AWS Lambda functions! This is a stellar update with some key functionality that I’ve personally been looking forward to for more than 4 years. I know our customers are excited to take it for a spin, so feel free to skip to the walk-through section below if you don’t want a trip down memory lane.

SQS was the first service we ever launched with AWS back in 2004, 14 years ago. For some perspective, the largest commercial hard drives in 2004 were around 60GB, PHP 5 came out, Facebook had just launched, the TV show Friends ended, GMail was brand new, and I was still in high school. Looking back, I can see some of the tenets that make AWS what it is today were present even very early on in the development of SQS: fully managed, network accessible, pay-as-you-go, and no minimum commitments. Today, SQS is one of our most popular services used by hundreds of thousands of customers at absolutely massive scales as one of the fundamental building blocks of many applications.

AWS Lambda, by comparison, is a relative new kid on the block having been released at AWS re:Invent in 2014 (I was in the crowd that day!). Lambda is a compute service that lets you run code without provisioning or managing servers and it launched the serverless revolution back in 2014. It has seen immediate adoption across a wide array of use-cases from web and mobile backends to IT policy engines to data processing pipelines. Today, Lambda supports Node.js, Java, Go, C#, and Python runtimes letting customers minimize changes to existing codebases and giving them flexibility to build new ones. Over the past 4 years we’ve added a large number of features and event sources for Lambda making it easier for customers to just get things done. By adding support for SQS to Lambda we’re removing a lot of the undifferentiated heavy lifting of running a polling service or creating an SQS to SNS mapping.

Let’s take a look at how this all works.

Triggering Lambda from SQS

First, I’ll need an existing SQS standard queue or I’ll need to create one. I’ll go over to the AWS Management Console and open up SQS to create a new queue. Let’s give it a fun name. At the moment the Lambda triggers only work with standard queues and not FIFO queues.

Now that I have a queue I want to create a Lambda function to process it. I can navigate to the Lambda console and create a simple new function that just prints out the message body with some Python code like this:

def lambda_handler(event, context):
    for record in event['Records']:
        print(record['body'])

Next I need to add the trigger to the Lambda function, which I can do from the console by selecting the SQS trigger on the left side of the console. I can select which queue I want to use to invoke the Lambda and the maximum number of records a single Lambda will process (up to 10, based on the SQS ReceiveMessage API).

Lambda will automatically scale out horizontally to invoke one Lambda function for each new message in my queue. For example, if I had 100 messages in my queue and enabled the trigger, Lambda would execute 100 invocations of my function to process the messages concurrently; with my batch size set to 10, those same messages would be handled by just 10 invocations. As the queue traffic fluctuates, the Lambda service will scale the polling operations up and down based on the number of inflight messages (I’ve covered this behavior in more detail in the additional info section at the bottom of this post). To control this behavior I can increase or decrease the concurrent execution limit for my function.
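If you would rather wire this up programmatically than in the console, a boto3 sketch might look like the following; the queue ARN, function name, and concurrency value are placeholders:

import boto3

lam = boto3.client("lambda")

# Connect the queue to the function (ARN and function name are placeholders)
lam.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-2:123456789012:MyQueue",
    FunctionName="ProcessSQSMessages",
    BatchSize=10,   # up to 10 records per invocation, matching the console setting
    Enabled=True,
)

# Cap how far Lambda is allowed to scale out for this function
lam.put_function_concurrency(
    FunctionName="ProcessSQSMessages",
    ReservedConcurrentExecutions=50,
)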

For each batch of messages processed if the function returns successfully then those messages will be removed from the queue. If the function errors out or times out then the messages will return to the queue after the visibility timeout set on the queue. Just as a quick note here, our Lambda function timeout has to be lower than the queue’s visibility timeout in order to create the event mapping from SQS to Lambda.

After adding the trigger, I can make any other changes I want to the function and save it. If we hop back over to the SQS console we can see the trigger is registered. I can configure and edit the trigger from the SQS console as well.

I’ll use the AWS CLI to enqueue a simple message and test the functionality:

aws sqs send-message --queue-url https://sqs.us-east-2.amazonaws.com/054060359478/SQS_TO_LAMBDA_IS_FINALLY_HERE_YAY --message-body "hello, world"

My Lambda receives the message and executes the code printing the message payload into my Amazon CloudWatch logs.

Of course all of this works with AWS SAM out of the box.

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Example of processing messages on an SQS queue with Lambda
Resources:
  MySQSQueueFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: python3.6
      CodeUri: src/
      Events:
        MySQSEvent:
          Type: SQS
          Properties:
            Queue: !GetAtt MySqsQueue.Arn
            BatchSize: 10
  MySqsQueue:
    Type: AWS::SQS::Queue

Additional Information

First of all, there are no additional charges for this feature, but because the Lambda service is continuously long-polling the SQS queue the account will be charged for those API calls at the standard SQS pricing rates.

So, a quick deep dive on concurrency and automatic scaling here – just keep in mind that this behavior could change. The automatic scaling behavior of Lambda is designed to keep polling costs low when a queue is empty while simultaneously letting us scale up to high throughput when the queue is being used heavily. When an SQS event source mapping is initially created and enabled, or when messages first appear after a period with no traffic, then the Lambda service will begin polling the SQS queue using five parallel long-polling connections. The Lambda service monitors the number of inflight messages, and when it detects that this number is trending up, it will increase the polling frequency by 20 ReceiveMessage requests per minute and the function concurrency by 60 calls per minute. As long as the queue remains busy it will continue to scale until it hits the function concurrency limits. As the number of inflight messages trends down Lambda will reduce the polling frequency by 10 ReceiveMessage requests per minute and decrease the concurrency used to invoke our function by 30 calls per-minute.

The documentation is up to date with more info than what’s contained in this post. You can find an example SQS event payload there as well. You can find more details from the SQS side in their documentation as well.

As always we’re excited to hear feedback about this feature either on Twitter or in the comments below.

Randall

Categories: Cloud

Amazon Comprehend Launches Asynchronous Batch Operations

AWS Blog - Wed, 06/27/2018 - 16:15

My colleague Jeff Barr last wrote about Amazon Comprehend, a service for discovering insights and relationships in text, when it launched at AWS re:Invent in 2017. Today, after iterating on customer feedback, we’re releasing a new asynchronous batch inferencing feature for Comprehend. Asynchronous batch operations work on documents stored in Amazon Simple Storage Service (S3) buckets and can perform all of the normal Comprehend operations like entity recognition, key phrase extraction, sentiment analysis, and language detection. These new asynchronous batch APIs support significantly larger documents than the single document and batch APIs, reducing the need for customers to truncate their documents for the service. Of course, all of the single document and batch synchronous API operations remain available for real-time results. The addition of asynchronous operations allows developers to choose the tools most suited for their applications. Let’s take a deeper look at the new API.

Asynchronous API Operations

The new batch APIs follow the same asynchronous call structure as Amazon Comprehend’s TopicDetection API. To analyze a collection of documents we first call one of the Start* APIs like StartDominantLanguageDetectionJob, StartEntitiesDetectionJob, StartKeyPhrasesDetectionJob, or StartSentimentDetectionJob.

Each of these APIs take an InputDataConfig and an OutputDataConfig that specify the incoming data format and location as well as where in S3 the results should be stored. The InputDataConfig specifies whether the input data should be treated as one document per file or one document per line.

Additionally we can name the job and include a unique request identifier for synchronization purposes. If we don’t supply these the Comprehend service will automatically generate them.

At the time of this writing the asynchronous operations support individual documents of up to 100KB for entity and key phrase detection, 1MB for language detection, and 5KB for sentiment detection. The total size of all files in the batch must be under 5GB and we cannot submit more than 1 million individual files per batch.
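Putting those pieces together, a boto3 sketch of starting an entities detection job might look like this; the bucket names, role ARN, and job name are placeholders:

import boto3

comprehend = boto3.client("comprehend")

# Bucket names, role ARN, and job name below are placeholders
job = comprehend.start_entities_detection_job(
    JobName="blog-entities-demo",
    LanguageCode="en",
    DataAccessRoleArn="arn:aws:iam::123456789012:role/ComprehendDataAccessRole",
    InputDataConfig={
        "S3Uri": "s3://example-input-bucket/documents/",
        "InputFormat": "ONE_DOC_PER_FILE",
    },
    OutputDataConfig={"S3Uri": "s3://example-output-bucket/results/"},
)
print(job["JobId"], job["JobStatus"])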

Now that we see what the API is doing let’s take a look at the updated console and start a job!

Amazon Comprehend Analysis Console

First I’ll navigate to the AWS Management Console and open Amazon Comprehend. Next I’ll select the new Analysis console.

From here I can create a new analysis job by clicking the create button in the top right of the console. I’ll create an entities detection job and select English as my document language. Then I’ll point the console at some sample data.

Now I’ll configure my output data location and make sure the service role has access to that S3 bucket. Then I’ll start the job!

Here I can see the operation started in the console and I can wait until it’s complete to view the detailed results.

Here on the job page I can see the status of the job and the output location. If I download the results from the S3 location I can take a look at the detected entities in the sample text.

I’ve truncated the results here but mostly they look something like this:

{ "Entities": [ { "BeginOffset": 875, "EndOffset": 899, "Score": 0.9936646223068237, "Text": "University of California", "Type": "ORGANIZATION" }, { "BeginOffset": 903, "EndOffset": 911, "Score": 0.9519965648651123, "Text": "Berkeley", "Type": "LOCATION" }, { "BeginOffset": 974, "EndOffset": 992, "Score": 0.9981470108032227, "Text": "Christopher Monroe", "Type": "PERSON" }, { "BeginOffset": 997, "EndOffset": 1010, "Score": 0.9992995262145996, "Text": "Mikhail Lukin", "Type": "PERSON" }, { "BeginOffset": 1095, "EndOffset": 1099, "Score": 0.9990954399108887, "Text": "2017", "Type": "DATE" } ], "File": "Sample.txt", "Line": 8 }

Pretty cool! We could go through similar steps for sentiment detection or key phrase detection. The fact that we can submit up to 5GB of data in a single batch means customers will spend less time transforming and truncating their documents.

I personally recommend a tool like AWS Step Functions for checking on the status of jobs programmatically. It’s very easy to set up and makes it simple to build programmatic analysis pipelines.
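Whichever orchestration tool you use, the status check itself is a single DescribeEntitiesDetectionJob call; here is a minimal polling sketch using the JobId returned by the Start call:

import time
import boto3

comprehend = boto3.client("comprehend")

def wait_for_job(job_id, delay=60):
    # Poll until the asynchronous job reaches a terminal state
    while True:
        props = comprehend.describe_entities_detection_job(JobId=job_id)["EntitiesDetectionJobProperties"]
        if props["JobStatus"] in ("COMPLETED", "FAILED", "STOPPED"):
            return props
        time.sleep(delay)

# Once complete, props["OutputDataConfig"]["S3Uri"] points at the results archive in S3.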

You could also use AWS Glue to call Comprehend as part of your regular ETL operations, as we mentioned in this blog post by Roy Hasson.

Additional Information

You can find detailed information on these new APIs in the documentation and learn more about the limits and best practices.

As previously mentioned the synchronous batch APIs are still available and work best for smaller sets of documents and smaller sizes.

As always, don’t hesitate to share your feedback here or on Twitter.

Randall

Categories: Cloud

New – Amazon Linux WorkSpaces

AWS Blog - Tue, 06/26/2018 - 11:39

Over two years ago I explained why I Love my Amazon WorkSpace. Today, with well over three years of experience under my belt, I have no reason to return to a local, non-managed desktop. I never have to worry about losing or breaking my laptop, keeping multiple working environments in sync, or planning for disruptive hardware upgrades. Regardless of where I am or what device I am using, I am highly confident that I can log in to my WorkSpace, find the apps and files that I need, and get my work done.

Now with Amazon Linux 2
As a WorkSpaces user, you can already choose between multiple hardware configurations and software bundles. You can choose hardware with the desired amount of compute power (expressed in vCPUs — virtual CPUs) and memory, configure as much storage as you need, and choose between Windows 7 and Windows 10 desktop experiences. If your organization already owns Windows licenses, you can bring them to the AWS Cloud via our BYOL (Bring Your Own License) program.

Today we are giving you another desktop option! You can now launch a WorkSpace that runs Amazon Linux 2, the Amazon Linux WorkSpaces Desktop, Firefox, Evolution, Pidgin, and Libre Office. The Amazon Linux WorkSpaces Desktop is based on MATE. It makes very efficient use of CPU and memory, allowing you to be both productive and frugal. It includes a full set of tools and utilities including a file manager, image editor, and terminal emulator.

Here are a few of the ways that Amazon Linux WorkSpaces can benefit you and your organization:

Development Environment – The combination of Amazon Linux WorkSpaces and Amazon Linux 2 makes for a great development environment. You get all of the AWS SDKs and tools, plus developer favorites such as gcc, Mono, and Java. You can build and test applications in your Amazon Linux WorkSpace and then deploy them to Amazon Linux 2 running on-premises or in the cloud.

Productivity Environment – Libre Office gives you (or the users that you support) access to a complete suite of productivity tools that are compatible with a wide range of proprietary and open source document formats.

Kiosk Support – You can build and economically deploy applications that run in kiosk mode on inexpensive and durable tablets, with centralized management and support.

Linux Workloads – You can run data science, machine learning, engineering, and other Linux-friendly workloads, taking advantage of AWS storage, analytics, and machine learning services.

There are also some operational and financial benefits. On the ops side, organizations that need to provide their users with a mix of Windows and Linux environments can create a unified operations model with a single set of tools and processes that meet the needs of the entire user community. Financially, this new option makes very efficient use of hardware, and the hourly usage model made possible by the AutoStop running mode can further reduce your costs.

Your WorkSpaces run in a Virtual Private Cloud (VPC), and can be configured to access your existing on-premises resources using a VPN connection across a dedicated line courtesy of AWS Direct Connect. You can access and make use of other AWS resources including Elastic File Systems.

Amazon Linux 2 with Long Term Support (LTS)
As part of today’s launch, we are also announcing that Long Term Support (LTS) is now available for Amazon Linux 2. We announced the first LTS candidate late last year, and are now ready to make the actual LTS version available. We will provide support, update, and bug fixes for all core packages for five years, until June 30, 2023. You can do an in-place upgrade from the Amazon Linux 2 LTS Candidate to the LTS release, but you will need to do a fresh installation if you are migrating from the Amazon Linux AMI.

You can run Amazon Linux 2 on your Amazon Linux WorkSpaces cloud desktops, on EC2 instances, in your data center, and on your laptop! Virtual machine images are available for Docker, VMware ESXi, Microsoft Hyper-V, KVM, and Oracle VM VirtualBox.

The extras mechanism in Amazon Linux 2 gives you access to the latest application software in the form of curated software bundles, packaged into topics that contain all of the dependencies needed for the software to run. Over time, as these applications stabilize and mature, they become candidates for the Amazon Linux 2 core channel, and subject to the Amazon Linux 2 Long Term Support policies. To learn more, read about the Extras Library.

To learn more about Amazon Linux 2, read my post, Amazon Linux 2 – Modern, Stable, and Enterprise-Friendly.

Launching an Amazon Linux WorkSpace
In this section, I am playing the role of the WorkSpaces administrator, and am setting up a Linux WorkSpace for my own use. In a real-world situation I would generally be creating WorkSpaces for other members of my organization.

I can launch an Amazon Linux WorkSpace from the AWS Management Console with a couple of clicks. If I am setting up Linux WorkSpaces for an entire team or division, I can also use the WorkSpaces API or the WorkSpaces CLI. I can use my organization’s existing Active Directory or I can have WorkSpaces create and manage one for me. I could also use the WorkSpaces API to build a self-serve provisioning and management portal for my users.
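If I were taking the API route instead, a boto3 sketch of launching a Linux WorkSpace might look like this; the directory ID, bundle ID, and user name are placeholders that you would look up with describe_workspace_directories and describe_workspace_bundles first:

import boto3

ws = boto3.client("workspaces")

# Directory, bundle, and user identifiers are placeholders
response = ws.create_workspaces(Workspaces=[{
    "DirectoryId": "d-1234567890",
    "UserName": "jbarr",
    "BundleId": "wsb-abcdefghi",   # an Amazon Linux 2 bundle
    "WorkspaceProperties": {"RunningMode": "AUTO_STOP"},
    "Tags": [{"Key": "department", "Value": "blog"}],
}])

for failed in response["FailedRequests"]:
    print("Failed:", failed.get("ErrorMessage"))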

I’m using a directory created by WorkSpaces, so I’ll enter the identifying information for each user (me, in this case), and then click Next Step:

I select one of the Amazon Linux 2 Bundles, choosing the combination of software and hardware that is the best fit for my needs, and click Next Step:

I choose the AutoStop running mode, indicate that I want my root and user volumes to be encrypted, and tag the WorkSpace, then click Next Step:

I review the settings and click Launch WorkSpaces to proceed:

The WorkSpace starts out in PENDING status and transitions to AVAILABLE within 20 minutes:

Signing In
When the WorkSpace is AVAILABLE, I receive an email with instructions for accessing it:

I click the link and set my password:

And then I download the client (or two) of my choice:

I install and launch the client, enter my registration code, and click Register:

And then I sign in to my Amazon Linux WorkSpace:

And here it is:

The WorkSpace is domain-joined to my Active Directory:

Because this is a managed desktop, I can easily modify the size of the root or the user volumes or switch to hardware with more or less power. This is, safe to say, far easier and more cost-effective than making on-demand changes to physical hardware sitting on your users’ desktops out in the field!

Available Now
You can launch Amazon Linux WorkSpaces in all eleven AWS Regions where Amazon WorkSpaces is already available:

Pricing is up to 15% lower than for comparable Windows WorkSpaces; see the Amazon WorkSpaces Pricing page for more info.

If you are new to WorkSpaces, the Amazon WorkSpaces Free Tier will let you run two AutoStop WorkSpaces for up to 40 hours per month, for two months, at no charge.

Jeff;

PS – If you are in San Francisco, join me at the AWS Loft today at 5 PM to learn more (registration is required).

 

Categories: Cloud

ZendCon & OpenEnterprise 2018

PHP News - Tue, 06/26/2018 - 01:41
Categories: PHP

SymfonyCon Lisbon 2018

PHP News - Mon, 06/25/2018 - 05:46
Categories: PHP

SymfonyLive Berlin 2018

PHP News - Mon, 06/25/2018 - 04:29
Categories: PHP

SymfonyLive USA 2018

PHP News - Mon, 06/25/2018 - 04:17
Categories: PHP

PHP 7.1.19 Released

PHP News - Mon, 06/25/2018 - 00:02
Categories: PHP

New Collaborative Editing for Amazon WorkDocs – Powered by Hancom Thinkfree Office Online

AWS Blog - Thu, 06/21/2018 - 12:36

I’ve got some important news for Amazon WorkDocs users. As a result of our partnership with Hancom, you can now edit Microsoft Office documents in your browser without having to install any applications or connect with another web service. You can quickly create a document, share it with team members, and let them make changes and contribute to the finished product. Everyone can see changes in real-time as they work together, regardless of where they are located or what device they are using to access WorkDocs.

This feature is available at no extra charge and you can start using it as soon as your WorkDocs administrator enables it. Let’s take a tour!

Collaborative Editing
I start by creating a document, spreadsheet, or presentation using the New menu. I’ll create a document:

I can create and edit my document from the comfort of my web browser:

Then I save and rename it (a default name is generated using the creation time as a starting point):

Next, I share it with my colleague Manoj so that he can take a look and make any desired edits:

I can see his edits in real-time:

And I can see all of the participants in the collaborative editing session:

WorkDocs creates a new revision after all of the participants have exited the editing session.

I can also create new spreadsheets and presentations and edit existing ones! Here’s a new spreadsheet:

And here’s an existing presentation (I opened one from 2008 just for fun):

Now Available
This feature is available now in the US West (Oregon) Region and will become available in other regions in the next couple of weeks. It is available at no extra charge to all WorkDocs users.

Jeff;

Categories: Cloud

PHP 7.2.7 Released

PHP News - Thu, 06/21/2018 - 08:12
Categories: PHP

PHP 7.3.0 alpha 2 Released

PHP News - Thu, 06/21/2018 - 02:46
Categories: PHP

Amazon EC2 Update – Additional Instance Types, Nitro System, and CPU Options

AWS Blog - Tue, 06/19/2018 - 08:47

I have a backlog of EC2 updates to share with you. We’ve been releasing new features and instance types at a rapid clip and it is time to catch up. Here’s a quick peek at where we are and where we are going…

Additional Instance Types
Here’s a quick recap of the most recent EC2 instance type announcements:

Compute-Intensive – The compute-intensive C5d instances provide a 25% to 50% performance improvement over the C4 instances. They are available in 5 regions and offer up to 72 vCPUs, 144 GiB of memory, and 1.8 TB of local NVMe storage.

General Purpose – The general purpose M5d instances are also available in 5 regions. They offer up to 96 vCPUs, 384 GiB of memory, and 3.6 TB of local NVMe storage.

Bare Metal – The i3.metal instances became generally available in 5 regions a couple of weeks ago. You can run performance analysis tools that are hardware-dependent, workloads that require direct access to bare-metal infrastructure, applications that need to run in non-virtualized environments for licensing or support reasons, and container environments such as Clear Containers, while you take advantage of AWS features such as Elastic Block Store (EBS), Elastic Load Balancing, and Virtual Private Clouds. Bare metal instances with 6 TB, 9 TB, 12 TB, and more memory are in the works, all designed specifically for SAP HANA and other in-memory workloads.

Innovation and the Nitro System
The Nitro system is a rich collection of building blocks that can be assembled in many different ways, giving us the flexibility to design and rapidly deliver EC2 instance types with an ever-broadening selection of compute, storage, memory, and networking options. We will deliver new instance types more quickly than ever in the months to come, with the goal of helping you to build, migrate, and run even more types of workloads.

Local NVMe Storage – The new C5d, M5d, and bare metal EC2 instances feature our Nitro local NVMe storage building block, which is also used in the Xen-virtualized I3 and F1 instances. This building block provides direct access to high-speed local storage over a PCI interface and transparently encrypts all data using dedicated hardware. It also provides hardware-level isolation between storage devices and EC2 instances so that bare metal instances can benefit from local NVMe storage.

Nitro Security Chip – A component that is part of our AWS server designs that continuously monitors and protects hardware resources and independently verifies firmware each time a system boots.

Nitro Hypervisor – A thin, quiescent hypervisor that manages memory and CPU allocation, and delivers performance that is indistinguishable from bare metal for most workloads (Brendan Gregg of Netflix benchmarked it at less than 1%).

Networking – Hardware support for the software defined network inside of each Virtual Private Cloud (VPC), Enhanced Networking, and Elastic Network Adapter.

Elastic Block Storage – Hardware EBS processing including CPU-intensive cryptographic operations.

Moving storage, networking, and security functions to hardware has important consequences for both bare metal and virtualized instance types:

Virtualized instances can make just about all of the host’s CPU power and memory available to the guest operating systems since the hypervisor plays a greatly diminished role.

Bare metal instances have full access to the hardware, but also have the same flexibility and feature set as virtualized EC2 instances, including CloudWatch metrics, EBS, and VPC.

To learn more about the hardware and software that make up the Nitro system, watch Amazon EC2 Bare Metal Instances or C5 Instances and the Evolution of Amazon EC2 Virtualization and take a look at The Nitro Project: Next-Generation EC2 Infrastructure.

CPU Options
This feature provides you with additional control over your EC2 instances and lets you optimize your instance for a particular workload. First, you can specify the desired number of vCPUs at launch time. This allows you to control the vCPU to memory ratio for Oracle and SQL Server workloads that need high memory, storage, and I/O but perform well with a low vCPU count. As a result, you can optimize your vCPU-based licensing costs when you Bring Your Own License (BYOL). Second, you can disable Intel® Hyper-Threading Technology (Intel® HT Technology) on instances that run compute-intensive workloads. These workloads sometimes exhibit diminished performance when Intel HT is enabled. Both of these options are available when you launch an instance using the AWS Command Line Interface (CLI) or one of the AWS SDKs. You simply specify the total number of cores and the number of threads per core using values chosen from the CPU Cores and Threads per CPU Core Per Instance Type table. Here’s how you would launch an instance with 6 CPU cores and Intel® HT Technology disabled:

$ aws ec2 run-instances --image-id ami-1a2b3c4d --instance-type r4.4xlarge --cpu-options "CoreCount=6,ThreadsPerCore=1"
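The equivalent request with boto3 (reusing the placeholder AMI ID from the CLI example) might look like this:

import boto3

ec2 = boto3.client("ec2")

# Same launch as the CLI example above: 6 cores with Intel HT Technology disabled
ec2.run_instances(
    ImageId="ami-1a2b3c4d",          # placeholder AMI ID from the CLI example
    InstanceType="r4.4xlarge",
    MinCount=1,
    MaxCount=1,
    CpuOptions={"CoreCount": 6, "ThreadsPerCore": 1},
)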

To learn more, read about Optimizing CPU Options.

Help Wanted
The EC2 team is always hiring! Here are a few of their open positions:

Jeff;

Categories: Cloud
