Feed aggregator

Dutch PHP Conference - CfP is open!

PHP News - Tue, 11/20/2018 - 06:00
Categories: PHP

New – Amazon Route 53 Resolver for Hybrid Clouds

AWS Blog - Mon, 11/19/2018 - 15:20

I distinctly remember the excitement I felt when I created my first Virtual Private Cloud (VPC) as a customer. I had just spent months building a similar environment on-premises and had been frustrated at the complicated setup. One of the immediate benefits that the VPC provided was a magical address at 10.0.0.2 where our EC2 instances sent Domain Name Service (DNS) queries. It was reliable, scaled with our workloads, and resolved both public and private domains without any input from us.

Like a lot of customers, we connected our on-premises environment with our AWS one via Direct Connect (DX), leading to cases where DNS names required resolution across the connection. Back then we needed to build DNS servers and provide forwarders to achieve this. That’s why today I am very excited to announce Amazon Route 53 Resolver for Hybrid Clouds. It’s a set of features that enable bi-directional querying between on-premises and AWS over private connections.

Before I dive into the new functionality, I would like to provide a shout out to our old faithful .2 resolver. As part of our announcement today I would like to let you know that we have officially named the .2 DNS resolver – Route 53 Resolver, in honor of the trillions of queries the service has resolved on behalf of our customers. Route 53 Resolver continues to provide DNS query capability for your VPC, free of charge. To support DNS queries across hybrid environments, we are providing two new capabilities: Route 53 Resolver Endpoints for inbound queries and Conditional Forwarding Rules for outbound queries.

Route 53 Resolver Endpoints

Inbound query capability is provided by Route 53 Resolver Endpoints, allowing DNS queries that originate on-premises to resolve AWS hosted domains. Connectivity needs to be established between your on-premises DNS infrastructure and AWS through a Direct Connect (DX) or a Virtual Private Network (VPN). Endpoints are configured through IP address assignment in each subnet for which you would like to provide a resolver.

Conditional Forwarding Rules

Outbound DNS queries are enabled through the use of Conditional Forwarding Rules. Domains hosted within your on-premises DNS infrastructure can be configured as forwarding rules in Route 53 Resolver. Rules will trigger when a query is made to one of those domains and will attempt to forward DNS requests to your DNS servers that were configured along with the rules. Like the inbound queries, this requires a private connection over DX or VPN.

When combined, these two capabilities allow for recursive DNS lookup for your hybrid workloads. This saves you from the overhead of managing, operating, and maintaining additional DNS infrastructure while operating both environments.

Route 53 Resolver in Action

1. Route 53 Resolver for Hybrid Clouds is region specific, so our first step is to choose the region in which we would like to configure our hybrid workloads. Once we have selected a region, we choose the query direction – inbound, outbound, or both.

2. We have selected both inbound and outbound traffic for this workload. First up is our inbound query configuration. We enter a name and choose a VPC. We assign one or more subnets from within the VPC (in this case we choose two for availability). From these subnets we can assign specific IP addresses to use as our endpoints, or let Route 53 Resolver assign them automatically.

3. We create a rule for our on-premises domain so that workloads inside the VPC can route DNS queries to our on-premises DNS infrastructure. We enter one or more IP addresses for our on-premises DNS servers and create our rule.

4. Everything is created and our VPC is associated with our inbound and outbound rules and can start routing traffic. Conditional Forwarding Rules can be shared across multiple accounts using AWS Resource Access Manager.
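
For teams that script their infrastructure, the same four steps can be expressed in code. Here is a minimal boto3 sketch of the console walkthrough above, assuming the route53resolver API; every ID, domain, and address below is a placeholder:

    import boto3

    resolver = boto3.client("route53resolver", region_name="us-east-1")

    # Step 2: an inbound endpoint across two subnets; omitting the Ip field
    # lets Route 53 Resolver assign the endpoint addresses automatically
    inbound = resolver.create_resolver_endpoint(
        CreatorRequestId="hybrid-inbound-2018-11-19",
        Name="hybrid-inbound",
        Direction="INBOUND",
        SecurityGroupIds=["sg-0123456789abcdef0"],   # placeholder
        IpAddresses=[
            {"SubnetId": "subnet-aaaa1111"},         # placeholder
            {"SubnetId": "subnet-bbbb2222"},         # placeholder
        ],
    )

    # Outbound queries leave the VPC through an outbound endpoint
    outbound = resolver.create_resolver_endpoint(
        CreatorRequestId="hybrid-outbound-2018-11-19",
        Name="hybrid-outbound",
        Direction="OUTBOUND",
        SecurityGroupIds=["sg-0123456789abcdef0"],
        IpAddresses=[
            {"SubnetId": "subnet-aaaa1111"},
            {"SubnetId": "subnet-bbbb2222"},
        ],
    )

    # Step 3: forward queries for the on-premises domain to our DNS servers
    rule = resolver.create_resolver_rule(
        CreatorRequestId="hybrid-rule-2018-11-19",
        Name="onprem-forward",
        RuleType="FORWARD",
        DomainName="corp.example.com",               # placeholder domain
        TargetIps=[{"Ip": "10.24.34.53", "Port": 53}],
        ResolverEndpointId=outbound["ResolverEndpoint"]["Id"],
    )

    # Step 4: associate the rule with the VPC so its workloads use it
    resolver.associate_resolver_rule(
        ResolverRuleId=rule["ResolverRule"]["Id"],
        VPCId="vpc-0123456789abcdef0",               # placeholder
    )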

Availability and Pricing

Route 53 Resolver remains free for DNS queries served within your VPC. Resolver Endpoints use Elastic Network Interfaces (ENIs) costing $0.125 per hour. DNS queries that are resolved by a Conditional Forwarding Rule or a Resolver Endpoint cost $0.40 per million queries up to the first billion and $0.20 per million after that. Route 53 Resolver for Hybrid Clouds is available today in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Asia Pacific (Singapore), with other commercial regions to follow.

-Shaun

Categories: Cloud

AWS Quest 2: Reaching Las Vegas

AWS Blog - Fri, 11/16/2018 - 11:23

Hey AWS Questers and puzzlehunters! We’ve reached the last day of AWS Quest: The Road to re:Invent! Ozz has made it from Seattle to Las Vegas—after taking the long way via Sydney, Tokyo, Beijing, Seoul, Singapore, Mumbai, Stockholm, Cape Town, Paris, London, Sao Paulo, New York City, Toronto, and Mexico City. Now in Vegas, Ozz plans to meet up with a new robotic friend for re:Invent 2018! This is a very special guest for re:Invent, and you might even have a chance to meet that friend at the conference if you’re attending. But first, you need to wrap up this hunt by finding a final answer. This answer is a little different from the previous ones: It’s an instruction to Ozz on how to find this new friend.

To uncover the final solution, you’ll need the answers to the puzzles so far. Here’s what we’ve done to date: Ozz started this journey in a coffee shop in Seattle. Ozz then met a bouncy animal friend in Sydney, sampled sushi in Tokyo, and got control of the nozzles on the Banpo Bridge in Seoul. The little robot then found many terra cotta warriors in Beijing, met a wordy merlion in Singapore, and tasted the spices of Mumbai.

After a quick shopping trip in Stockholm, Ozz investigated the music of Cape Town, did a whole lot more shopping in Paris, and received a letter from a clockwork friend in London. Then, it was off across the Atlantic to São Paulo where Ozz engaged in some healthy capoeira. Our robot hero went to New York City and toured the skyscrapers, had a puck-shaped hockey treat in Toronto, and rode the roller coasters at Chapultepec Park in Mexico City. Along the way, with your help, we decoded the puzzles and got 15 different postcards of Ozz with special souvenirs from each city!

After reaching Las Vegas, Ozz has passed on this message to this new robotic friend:

“Boop boop boop beeeep beeeep. Beeeep beeeep boop boop boop. Beeeep boop boop boop boop. Beeeep boop boop boop boop. Boop boop boop beeeep beeeep. Beeeep beeeep beeeep beeeep boop. Beeeep beeeep beeeep boop boop. Boop boop boop beeeep beeeep. Boop boop boop boop boop. Boop boop boop beeeep beeeep. Boop beeeep beeeep beeeep beeeep boop boop beeeep beeeep beeeep. Beeeep beeeep boop boop boop. Boop boop boop boop boop. Beeeep beeeep boop boop boop. Boop beeeep beeeep beeeep beeeep!”

Well, that didn’t make much sense. But it’s likely just another puzzling example of our little robot’s sense of humor. Join the AWS Slack community in solving the puzzle and then type the solution into the submission page. If correct, you’ll see the final postcard from Ozz.

If you’ve been playing along and managed to solve the final puzzle, be sure to tweet at me and Jeff if you’ll be at re:Invent. We have Ozz pins to give out as well as a few other treats. You can also get an Ozz pin by visiting the Swag Booth at the Venetian and letting them know you’re an AWS blog reader.

Thanks for playing AWSQuest! For more puzzling fun, visit the Camp re:Invent Trivia Challenge with Jeff Barr at 7 PM on November 28th in the Venetian Theatre.

Categories: Cloud

Announcing AWS Machine Learning Heroes (plus new AWS Community Heroes)

AWS Blog - Fri, 11/16/2018 - 10:57

The AWS Heroes program helps developers find inspiration and build skills from community leaders who have extensive AWS knowledge and a passion for sharing their expertise with others. The program continues to evolve to align with technology trends and recognize community leaders who focus on specific technical disciplines.

Today we are excited to launch a new category of AWS Heroes: AWS Machine Learning Heroes.

Introducing AWS Machine Learning Heroes
AWS Machine Learning Heroes are developers and academics who are passionate enthusiasts of emerging AI/ML technologies. Proficient with deep learning frameworks such as MXNet, PyTorch, and TensorFlow, they are early adopters of Amazon ML technologies and enjoy teaching others how to use machine learning APIs such as Amazon Rekognition (computer vision) and Amazon Comprehend (natural language processing).

Developers from beginner to advanced ML proficiency can learn and apply ML at speed and scale through Hero blog posts, videos, sessions, and direct engagement. Our initial cohort of Machine Learning Heroes includes:

Agustinus Nalwan – Melbourne, Australia

Agustinus (aka Gus) is the Head of AI at Carsales. He has extensive experience in Deep Learning, setting up distributed training EC2 clusters for deep learning on AWS, and is an advocate of Amazon SageMaker to simplify the machine learning pipeline.

Cyrus Wong – Hong Kong

Cyrus Wong is a Data Scientist at the IT Department of the Hong Kong Institute of Vocational Education. He has achieved all 9 AWS Certifications and builds AI/ML projects with his students using Amazon Rekognition, Amazon Lex, Amazon Polly, and Amazon Comprehend.

Gillian McCann – Belfast, United Kingdom

Gillian is Head of Cloud Engineering & AI at Workgrid Software. A passionate advocate of cloud native architecture, Gillian leads a team that explores how AWS conversational AI can be leveraged to improve the employee experience.

Matthew Fryer – London, United Kingdom

Matt leads a team that develops new data science/algorithm functions at Hotels.com and the Expedia Affiliate Network. He has spoken at AWS Summits and other conferences on why machine learning is important to Hotels.com.

Sung Kim – Seoul, South Korea

Sung is an Associate Professor of Computer Science at the Hong Kong University of Science and Technology. His online deep learning course, which includes how to use AWS ML services, has more than 4M views and 27K subscribers.

Please meet our latest AWS Community Heroes
Also this month we are excited to introduce you to four new AWS Community Heroes:

John Varghese – Mountain View, USA

John is a Cloud Steward at Intuit responsible for the AWS infrastructure of Intuit’s Futures Group. He runs the AWS Bay Area meetup in the San Francisco Peninsula and has organized multiple AWS Community Day events in the Bay Area.

Serhat Can – Istanbul, Turkey

Serhat is a Technical Evangelist at Atlassian. He is a community organizer as well as a speaker. As a Devopsdays core team member, he helps local DevOps communities organize events in 70+ countries and counting.

Bryan Chasko – Las Cruces, USA

Bryan is Chief Technology Officer at Electronic Caregiver. A Solutions Architect and Big Data specialist, Bryan uses Amazon Sumerian to apply Augmented and Virtual Reality based solutions to real world business challenges.

Sathyajith Bhat – Bangalore, India

Sathyajith Bhat is a DevOps Engineer for Adobe I/O. He is the author of Practical Docker with Python, and an organizer of the AWS Bangalore Users Group Meetup, AWS Community Day Bangalore, and Barcamp Bangalore.

To learn more about the AWS Heroes program or to connect with an AWS Hero in your community, click here.

Categories: Cloud

Some Unique Sessions at re:Invent 2018

AWS Blog - Fri, 11/16/2018 - 06:11

We recently added three unique breakout sessions to the re:Invent Session Catalog and I want to make sure that you are aware of them.

It’s rare for Distinguished Engineers like Peter Vosshall, Principal Engineers like Colm MacCarthaigh, and Directors and VPs responsible for entire AWS services to speak within a three-day period. So, you should take this opportunity to hear from Peter and Colm, and from Deepak Singh (AWS Containers), David Richardson (Serverless), and Ken Exner (Developer Tools) at re:Invent 2018.

Grab a seat at How AWS Minimizes the Blast Radius of Failures to hear Peter Vosshall speak candidly about the philosophies that guide operations at AWS and the techniques AWS uses to reduce the blast radius of systems failures.

Join Closing Loops and Opening Minds: How to Take Control of Systems, Big and Small to deep dive into the theories behind AWS control plane design with Colm MacCarthaigh.

Or, sit in while the AWS leaders behind Containers, Serverless, and Developer Tools discuss the changes to architectural patterns, operational models, and software delivery that take place on the journey from monolith to microservices in their joint Leadership Session: Using DevOps, Microservices, and Serverless to Accelerate Innovation.

Seats are still available and you still have a chance to get one. Reserve yours before it is too late.

See you in Vegas!

Jeff;

Categories: Cloud

Amazon S3 Block Public Access – Another Layer of Protection for Your Accounts and Buckets

AWS Blog - Thu, 11/15/2018 - 20:40

Newly created Amazon S3 buckets and objects are (and always have been) private and protected by default, with the option to use Access Control Lists (ACLs) and bucket policies to grant access to other AWS accounts or to public (anonymous) requests. The ACLs and policies give you lots of flexibility. You can grant permissions to multiple accounts, restrict access to specific IP addresses, require the use of Multi-Factor Authentication (MFA), allow other accounts to upload new objects to a bucket, and much more.

We want to make sure that you use public buckets and objects as needed, while giving you tools to make sure that you don’t make them publicly accessible due to a simple mistake or misunderstanding. For example, last year we provided you with a Public indicator to let you know at a glance which buckets are publicly accessible:

The bucket view is sorted so that public buckets appear at the top of the page by default.

We also made Trusted Advisor‘s bucket permission check free:

New Amazon S3 Block Public Access
Today we are making it easier for you to protect your buckets and objects with the introduction of Amazon S3 Block Public Access. This is a new level of protection that works at the account level and also on individual buckets, including those that you create in the future. You have the ability to block existing public access (whether it was specified by an ACL or a policy) and to ensure that public access is not granted to newly created items. If an AWS account is used to host a data lake or another business application, blocking public access will serve as an account-level guard against accidental public exposure. Our goal is to make clear that public access is to be used for web hosting!

This feature is designed to be easy to use, and can be accessed from the S3 Console, the CLI, the S3 APIs, and from within CloudFormation templates. Let’s start with the S3 Console and a bucket that is public:

I can exercise control at the account level by clicking Public access settings for this account:

I have two options for managing public ACLs and two for managing public bucket policies. Let’s take a closer look at each one:

Block new public ACLs and uploading public objects – This option disallows the use of new public bucket or object ACLs, and is used to ensure that future PUT requests that include them will fail. It does not affect existing buckets or objects. Use this setting to protect against future attempts to use ACLs to make buckets or objects public. If an application tries to upload an object with a public ACL or if an administrator tries to apply a public access setting to the bucket, this setting will block the public access setting for the bucket or the object.

Remove public access granted through public ACLs – This option tells S3 not to evaluate any public ACL when authorizing a request, ensuring that no bucket or object can be made public by using ACLs. This setting overrides any current or future public access settings for current and future objects in the bucket. If an existing application is currently uploading objects with public ACLs to the bucket, this setting will override the setting on the object.

Block new public bucket policies – This option disallows the use of new public bucket policies, and is used to ensure that future PUT requests that include them will fail. Again, this does not affect existing buckets or objects. This setting ensures that a bucket policy cannot be updated to grant public access.

Block public and cross-account access to buckets that have public policies – If this option is set, access to buckets that are publicly accessible will be limited to the bucket owner and to AWS services. This option can be used to protect buckets that have public policies while you work to remove the policies; it serves to protect information that is logged to a bucket by an AWS service from becoming publicly accessible.

To make changes, I click Edit, check the desired public access settings, and click Save:

I recommend that you use these settings for any account that is used for internal AWS applications!

Then I confirm my intent:

After I do this, I need to test my applications and scripts to ensure that everything still works as expected!

When I make these settings at the account level, they apply to my current buckets, and also to those that I create in the future. However, I can also set these options on individual buckets if I want to take a more fine-grained approach to access control. If I set some options at the account level and others on a bucket, the protections are additive. I select a bucket and click Edit public access settings:

Then I select the desired options:

Since I have already denied all public access at the account level, this is actually redundant, but I want you to know that you have control at the bucket level. One thing to note: I cannot override an account-level setting by changing the options that I set at the bucket level.

I can see the public access status of all of my buckets at a glance:

Programmatic Access
I can also access this feature by making calls to the S3 API. Here are the functions:

GetPublicAccessBlock – Retrieve the public access block options for an account or a bucket.

PutPublicAccessBlock – Set the public access block options for an account or a bucket.

DeletePublicAccessBlock – Remove the public access block options from an account or a bucket.

GetBucketPolicyStatus – See if the bucket access policy is public or not.
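
Here is a minimal boto3 sketch of these calls. Note that account-level settings go through the S3 Control API (which requires the account ID), while bucket-level settings and the policy status check go through the S3 API; the account ID and bucket name below are placeholders:

    import boto3

    # Account-level settings use the S3 Control API
    s3control = boto3.client("s3control")
    s3control.put_public_access_block(
        AccountId="111122223333",                  # placeholder account ID
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )

    # Bucket-level settings use the S3 API
    s3 = boto3.client("s3")
    s3.put_public_access_block(
        Bucket="jbarr-public-demo",                # placeholder bucket name
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": False,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )

    # Check whether the bucket's policy makes it public
    status = s3.get_bucket_policy_status(Bucket="jbarr-public-demo")
    print(status["PolicyStatus"]["IsPublic"])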

I can also set the options for a bucket when I create it via a CloudFormation template:

{
  "Type": "AWS::S3::Bucket",
  "Properties": {
    "PublicAccessBlockConfiguration": {
      "BlockPublicAcls": true,
      "IgnorePublicAcls": false,
      "BlockPublicPolicy": true,
      "RestrictPublicBuckets": true
    }
  }
}

Things to Know
Here are a couple of things to keep in mind when you are making use of S3 Block Public Access:

New Buckets – Going forward, buckets that you create using the S3 Console will have all four of the settings enabled, as recommended for any application other than web hosting. You will need to disable one or more of the settings in order to make the bucket public.

Automated Reasoning – The determination of whether a given policy or ACL is considered public is made using our Zelkova Automated Reasoning system (you can read How AWS Uses Automated Reasoning to Help You Achieve Security at Scale to learn more).

Organizations – If you are using AWS Organizations, you can use a Service Control Policy (SCP) to restrict the settings that are available to the AWS account within the organization. For example, you can set the desired public access settings for any desired accounts and then use an SCP to ensure that the settings cannot be changed by the account owners.

Charges – There is no charge for the use of this feature; you pay the usual prices for all requests that you make to the S3 API.

Available Now
Amazon S3 Block Public Access is available now in all commercial AWS regions and you can (and should) start using it today!

Jeff;

Categories: Cloud

New – Train Custom Document Classifiers with Amazon Comprehend

AWS Blog - Thu, 11/15/2018 - 16:06

Amazon Comprehend gives you the power to process natural-language text at scale (read my introductory post, Amazon Comprehend – Continuously Trained Natural Language Processing, to learn more). After launching in late 2017 with support for English and Spanish, we have added customer-driven features including Asynchronous Batch Operations, Syntax Analysis, support for additional languages (French, German, Italian, and Portuguese), and availability in more regions.

Using automatic machine learning (AutoML), Comprehend lets you create custom Natural Language Processing (NLP) models using data that you already have, without the need to learn the ins and outs of ML. Based on your data set and use case, it automatically selects the right algorithm and tuning parameters, then builds and tests the resulting model.

If you already have a collection of tagged documents (support tickets, call center conversations via Amazon Transcribe, forum posts, and so forth), you can use them as a starting point. In this context, tagged simply means that you have examined each document and assigned a label that characterizes it in the desired way. Custom Classification needs at least 50 documents for each label, but can do an even better job if it has hundreds or thousands.

In this post I will focus on Custom Classification, and will show you how to train a model that separates clean text from text that contains profanities. Then I will show you how to use the model to classify new text.

Using Classifiers
My starting point is a CSV file of training text that looks like this (I blurred all of the text; trust me that there’s plenty of profanity):

The training data must reside in an S3 object, with one label and one document per line:

Next, I navigate to the Amazon Comprehend Console and click Classification. I don’t have any existing classifiers, so I click Create classifier to make one:

I name my classifier and select a language for my documents, choose the S3 bucket where my training data resides, and then create an AWS Identity and Access Management (IAM) role that has permission to access the bucket. Then I click Create classifier to proceed:

The training process begins right away:

The status changes to Trained within minutes, and now I am ready to create an analysis job to classify some text, some of it also filled with profanity:

I put this text into another S3 bucket, click Analysis in the console, and click Create job. Then I give my job a name, choose Custom classification as the Analysis type, and select the classifier that I just built. I also point to the input bucket (with the file above), and another bucket that will receive the results, classified per my newly built classifier, and click Create job to proceed (important safety tip: if you use the same S3 bucket for the source and the destination, be sure to reference the input document by name):

The job begins right away, and also takes just minutes to complete:

The results are stored in the S3 bucket that I selected when I created the job:

Each line of output corresponds to a document in the input file:

Here’s a detailed look at one line:

{
  "File": "profanity_test.csv",
  "Line": "0",
  "Classes": [
    { "Name": "PROFANITY", "Score": 1.0 },
    { "Name": "NON_PROFANITY", "Score": 0.0 }
  ]
}

As you can see, the new Classification Service is powerful and easy to use. I was able to get useful, high-quality results in minutes without knowing anything about Machine Learning.

By the way, you can also train and test models using the Amazon Comprehend CLI and the Amazon Comprehend APIs.
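
As a rough illustration, here is a hedged boto3 sketch of the same train-then-classify flow; the bucket URIs, role ARN, and names are placeholders:

    import boto3

    comprehend = boto3.client("comprehend", region_name="us-east-1")

    # Train a custom classifier from the labeled CSV in S3
    classifier = comprehend.create_document_classifier(
        DocumentClassifierName="profanity-classifier",
        DataAccessRoleArn="arn:aws:iam::111122223333:role/ComprehendS3Access",
        InputDataConfig={"S3Uri": "s3://my-training-bucket/training.csv"},
        LanguageCode="en",
    )

    # Once the classifier status reaches TRAINED, run an async analysis job
    comprehend.start_document_classification_job(
        JobName="profanity-test",
        DocumentClassifierArn=classifier["DocumentClassifierArn"],
        InputDataConfig={
            "S3Uri": "s3://my-input-bucket/profanity_test.csv",
            "InputFormat": "ONE_DOC_PER_LINE",
        },
        OutputDataConfig={"S3Uri": "s3://my-output-bucket/results/"},
        DataAccessRoleArn="arn:aws:iam::111122223333:role/ComprehendS3Access",
    )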

Available Now
Amazon Comprehend Classification Service is available today, in all regions where Comprehend is available.

Jeff;

Categories: Cloud

New – EC2 Auto Scaling Groups With Multiple Instance Types & Purchase Options

AWS Blog - Wed, 11/14/2018 - 19:40

Earlier this year I told you about EC2 Fleet, an AWS building block that makes it easy for you to create fleets that are built from a combination of EC2 On-Demand, Reserved, and Spot Instances that span multiple EC2 instance types. In that post I showed you how to create a fleet and walked through an example that created a genomics processing pipeline that used a mix of M4 and M5 instances. I also dropped a hint to let you know that we were working on integrating EC2 Fleet with Auto Scaling and other AWS services.

Auto Scaling Across Multiple Instance Types & Purchase Options
Today I am happy to let you know that you can now create Auto Scaling Groups that grow and shrink in response to changing conditions, while also making use of the most economical combination of EC2 instance types and pricing models. You have full control of the instance types that will be used to build your group, along with the ability to control the mix of On-Demand and Spot. You can also update your existing Auto Scaling Groups to take advantage of this new feature.

The Auto Scaling Groups that you create are optimized anew each time a scale-out or scale-in event takes place, always seeking the lowest overall cost while meeting the other requirements set by your configuration. You can modify the configuration as newer instance types become available, allowing you to create a group that evolves in step with EC2.

Creating an Auto Scaling Group
I can create an Auto Scaling Group from the EC2 Console, CLI, or API. The first step is to make sure that I have a suitable Launch Template (it should not specify the use of Spot Instances). Here’s mine:

Then I navigate to my Auto Scaling Groups and click Create Auto Scaling group:

I click Launch Template, select my ProdWebServer template, and click Next Step to proceed:

I name my group and select Combine purchase models and instances to unlock the new functionality:

Now I select the instance types that I want to use. The list is prioritized: instances at the top of the list will be used in preference to those lower down when On-Demand instances are launched. My app will run fine on M4 or M5 instances with 2 or more vCPUs:

I can accept the default settings for my group’s composition or I can set them myself by unchecking Use default:

Here’s what I can do:

Maximum Spot Price – Sets the maximum Spot price that I want to pay. The default setting caps this bid at the On-Demand price.

Spot Allocation Strategy – Controls the amount of per-AZ diversity for the Spot Instances. A larger number adds some flexibility at times when a particular instance type is in high demand within an AZ.

Optional On-Demand Base – Controls how much of the initial capacity is made up of On-Demand Instances. Keeping this set to 0 indicates that I prefer to launch On-Demand Instances as a percentage of the total group capacity that is running at any given time.

On-Demand Percentage Above Base – Controls the percentage of the add-on to the initial group that is made up of On-Demand Instances versus the percentage that is made up of Spot Instances.

As you can see, I have full control over how my group is built. I leave them all as-is, set my group to start with 4 instances, choose my VPC subnets, and click Next to set up my scaling policies, as usual:

I disable scale-in for demo purposes (you don’t need to do this for your group):

I click past the Configure Notifications, and indicate that I want to tag my group and the EC2 instances in it:

Then I review my settings and click Create Auto Scaling Group to move ahead:

My initial group of four instances is ready to go within minutes:

I can filter by tag in the EC2 Console and display the Lifecycle column to see the mix of On-Demand and Spot Instances:

I can modify my Auto Scaling Group, reducing the On-Demand Percentage to 20% and doubling the Desired Capacity (this is my demo-mode way of showing you what happens when the group scales out):

The changes take effect within minutes; new Spot Instances are launched, some of the existing On-Demand Instances are terminated, and the composition of my group reflects the new settings:
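
If you prefer the CLI or SDK to the console, the walkthrough above corresponds to a single CreateAutoScalingGroup call with a MixedInstancesPolicy. Here is a minimal boto3 sketch; the template name, subnets, and instance types are placeholders:

    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="ProdWebServer-group",
        MinSize=4,
        MaxSize=8,
        DesiredCapacity=4,
        VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # placeholders
        MixedInstancesPolicy={
            "LaunchTemplate": {
                "LaunchTemplateSpecification": {
                    "LaunchTemplateName": "ProdWebServer",    # placeholder
                    "Version": "$Latest",
                },
                # Priority order used when launching On-Demand Instances
                "Overrides": [
                    {"InstanceType": "m5.xlarge"},
                    {"InstanceType": "m4.xlarge"},
                    {"InstanceType": "m5.2xlarge"},
                ],
            },
            "InstancesDistribution": {
                "OnDemandBaseCapacity": 0,
                "OnDemandPercentageAboveBaseCapacity": 20,
                "SpotAllocationStrategy": "lowest-price",
                "SpotInstancePools": 2,
                # Omitting SpotMaxPrice caps the bid at the On-Demand price
            },
        },
    )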

Here are a couple of things to keep in mind when you start to use this cool new feature:

Reserved Instances – We plan to add support for the preferential use of Reserved Instances in the near future. Today, if you own Reserved Instances, specify their instance types as early as possible in the list I showed you earlier. Your discounts will apply to any On-Demand instances that match available Reserved Instances.

Weight – All instance types have the same weight; we plan to give you the ability to specify weights in the near future. This will allow you to specify custom capacity units for each instance using either memory or vCPUs, and to specify the overall desired capacity in the same units.

Cost – The feature itself is available to you at no charge. If you switch part or all of your Auto Scaling Groups over to Spot Instances, you may be able to save up to 90% when compared to On-Demand Instances.

ECS and EKS – If you are running Amazon ECS or Amazon Elastic Container Service for Kubernetes on a cluster that makes use of an Auto Scaling Group, you can update the group to make use of multiple instance types and purchase options.

Available Now
This feature is available now and you can start using it today in all commercial AWS regions!

Jeff;

Categories: Cloud

New – CloudFormation Drift Detection

AWS Blog - Tue, 11/13/2018 - 13:17

AWS CloudFormation supports you in your efforts to implement Infrastructure as Code (IaC). You can use a template to define the desired AWS resource configuration, and then use it to launch a CloudFormation stack. The stack contains the set of resources defined in the template, configured as specified. When you need to make a change to the configuration, you update the template and use a CloudFormation Change Set to apply the change. Your template completely and precisely specifies your infrastructure and you can rest assured that you can use it to create a fresh set of resources at any time.

That’s the ideal case! In reality, many organizations are still working to fully implement IaC. They are educating their staff and adjusting their processes, both of which take some time. During this transition period, they sometimes end up making direct changes to the AWS resources (and their properties) without updating the template. They might make a quick out-of-band fix to change an EC2 instance type, fix an Auto Scaling parameter, or update an IAM permission. These unmanaged configuration changes become problematic when it comes time to start fresh. The configuration of the running stack has drifted away from the template and is no longer properly described by it. In severe cases, the change can even thwart attempts to update or delete the stack.

New Drift Detection
Today we are announcing a powerful new drift detection feature that was designed to address the situation that I described above. After you create a stack from a template, you can detect drift from the Console, CLI, or from your own code. You can detect drift on an entire stack or on a particular resource, and see the results in just a few minutes. You then have the information necessary to update the template or to bring the resource back into compliance, as appropriate.

When you initiate a check for drift detection, CloudFormation compares the current stack configuration to the one specified in the template that was used to create or update the stack and reports on any differences, providing you with detailed information on each one.

We are launching with support for a core set of services, resources, and properties, with plans to add more over time. The initial list of resources spans API Gateway, Auto Scaling, CloudTrail, CloudWatch Events, CloudWatch Logs, DynamoDB, Amazon EC2, Elastic Load Balancing, IAM, AWS IoT, Lambda, Amazon RDS, Route 53, Amazon S3, Amazon SNS, Amazon SQS, and more.

You can perform drift detection on stacks that are in the CREATE_COMPLETE, UPDATE_COMPLETE, UPDATE_ROLLBACK_COMPLETE, and UPDATE_ROLLBACK_FAILED states. Drift detection does not descend into stacks that are nested within the one you check; you can run the check on each nested stack yourself instead.
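
Drift detection is asynchronous when you drive it from code: you start a detection run, poll its status, and then inspect the per-resource results. Here is a minimal boto3 sketch (the stack name is a placeholder):

    import boto3
    import time

    cfn = boto3.client("cloudformation")

    # Kick off a drift detection run on the stack
    detection_id = cfn.detect_stack_drift(
        StackName="efs-demo-stack"            # placeholder stack name
    )["StackDriftDetectionId"]

    # Poll until the detection run completes
    while True:
        status = cfn.describe_stack_drift_detection_status(
            StackDriftDetectionId=detection_id
        )
        if status["DetectionStatus"] != "DETECTION_IN_PROGRESS":
            break
        time.sleep(5)

    print(status["StackDriftStatus"])         # e.g. IN_SYNC or DRIFTED

    # Inspect the drift details for each checked resource
    drifts = cfn.describe_stack_resource_drifts(StackName="efs-demo-stack")
    for drift in drifts["StackResourceDrifts"]:
        print(drift["LogicalResourceId"], drift["StackResourceDriftStatus"])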

Drift Detection in Action
I tested this feature on the simple stack that I used when I wrote about Provisioned Throughput for Amazon EFS. I simply select the stack and choose Detect drift from the Action menu:

I confirm my intent and click Yes, detect:

Drift detection starts right away; I can Close the window while it runs:

After it completes I can see that the Drift status of my stack is IN_SYNC:

I can also see the drift status of each checked resource by taking a look at the Resources tab:

Now, I will create a fake change by editing the IAM role, adding a new policy:

I detect drift a second time, and this time I find (no surprise) that my stack has drifted:

I click View details, and I inspect the Resource drift status to learn more:

I can expand the status line for the modified resource to learn more about the drift:

Available Now
This feature is available now and you can start using it today in the US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and South America (São Paulo) Regions. As I noted above, we are launching with support for a strong, initial set of resources, and plan to add many more in the months to come.

Jeff;

Categories: Cloud

In the Works – AWS Region in Milan, Italy

AWS Blog - Tue, 11/13/2018 - 00:17

Late last month I announced that we are working on an AWS Region in South Africa. Today I would like to let you know that we are also building an AWS Region in Italy and plan to open it up in early 2020.

Milan in 2020
The upcoming Europe (Milan) Region will have three Availability Zones and will be our sixth region in Europe, joining the existing regions in France, Germany, Ireland, the UK, and the new region in Sweden that is set to launch later this year. We currently have 57 Availability Zones in 19 geographic regions worldwide, and another 15 Availability Zones across five regions in the works for launch between now and the first half of 2020 (check out the AWS Global Infrastructure page for more info). Like all of our existing regions, this one is designed and built to meet the most rigorous compliance standards and to provide the highest level of security for AWS customers.

AWS in Italy
AWS customers in Italy have been using our existing regions for more than a decade. Hot startups, enterprises, and public sector organizations in Italy are all running their mission-critical applications on the AWS Cloud. Here’s a tasting menu to give you an idea of what’s already happening:

Ferrero is one of the world’s largest chocolate manufacturers (including the Pocket Coffee that powers my blogging). They have been using AWS since 2010, and use a template-driven model that lets them share features and functions across 250 web sites for 80 countries, giving them the ability to handle traffic surges while reducing costs by 30%.

Mediaset runs multiple broadcast networks and digital channels, as well as a pay-TV service, advertising agencies, and Italian film studio Medusa. The Mediaset Premium Online soccer service now attracts over 600,000 unique monthly visitors, doubling in size since it was launched last year. AWS allows them to meet this demand without adding more hardware, while also scaling up and down on an as-needed basis.

Eataly is the largest online marketplace for Italian food and wine products. After moving from physical stores to the web, they decided to use AWS to ensure scalability. Today, they use a wide range of AWS services, deliver 1.5 to 3 million page views daily, and handle holiday peaks ranging from 100 to 1000 orders per day.

Vodafone Italy has more than 30 million customers for their mobile services. They used AWS to power a new pay-as-you-go service to allow mobile customers to add credit to their accounts, building the service from scratch to be PCI DSS Level 1 compliant and to scale rapidly, all in just 3 months, and with a 30% reduction in capital expenses.

The European Space Agency (ESA) Centre for Earth Observation in Frascati, Italy runs the Data User Element (DUE) program. Although much of the work takes place in Earth-orbiting satellites, the program also takes advantage of EC2 and S3, storing up to 30 terabytes of images and observations at peak times, all available to a 50,000-person user community.

The new region will give these customers (and many others) a new option with even lower latency for their local customers, and will also open the door to applications that must comply with strict data sovereignty requirements.

Investing in Italy’s Future
The upcoming Europe (Milan) Region is just one step along a long path! Back in 2012 we launched the first Point of Presence (PoP) in Milan and now use it to deliver Amazon CloudFront, Amazon Route 53, AWS Shield, and AWS WAF services to Italy, sharing the load with a PoP in Palermo that we launched in 2017. In 2016 we acquired Asti-based NICE Software (read Amazon Web Services to Acquire NICE).

We are also working to help prepare developers in Italy for the digital future, with programs like AWS Educate, AWS Academy, and AWS Activate. Dozens of universities and business schools across Italy are already participating in our educational programs, as are a plethora of startups and accelerators.

Stay Tuned
I’ll be sure to share additional news about this and other upcoming AWS regions as soon as I have it, so stay tuned!

Jeff;

Categories: Cloud

AWS GovCloud (US-East) Now Open

AWS Blog - Mon, 11/12/2018 - 17:08

Last year I told you that we were working on AWS GovCloud (US-East), an eastern US companion to the existing AWS GovCloud (US-West) Region that we launched in 2011. The new region is now open and ready to serve the needs of federal, state, and local government agencies, the IT contractors that serve them, and customers with regulated workloads. It offers added redundancy, data durability, and resiliency, and also provides additional options for disaster recovery. This is an isolated AWS region, subject to FedRAMP High and Moderate baselines, operated by US citizens on US soil. It is accessible only to vetted US entities and root account holders, who must confirm that they are US Persons (citizens or permanent residents) in order to gain access. You can read Achieve FedRAMP High Compliance in the AWS GovCloud (US) Region to learn more.

AWS GovCloud (US) gives vetted government customers and regulated industry customers and their partners the flexibility to architect secure cloud solutions that comply with: the FedRAMP High baseline, the DOJ’s Criminal Justice Information Systems (CJIS) Security Policy, U.S. International Traffic in Arms Regulations (ITAR), Export Administration Regulations (EAR), Department of Defense (DoD) Cloud Computing Security Requirements Guide (SRG) for Impact Levels 2, 4 and 5, FIPS 140-2, IRS-1075, and other compliance regimes.

Lots of Services
Applications running in this region can make use of Auto Scaling (EC2 and Application), AWS Certificate Manager (ACM), AWS CloudFormation, AWS CloudTrail, Amazon CloudWatch, CloudWatch Events, Amazon CloudWatch Logs, AWS CodeDeploy, AWS Config, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, AWS Elastic Beanstalk, Amazon Elastic Block Store (EBS), Amazon ElastiCache, Amazon Elastic Compute Cloud (EC2), EC2 Container Registry, Amazon ECS, Elastic Load Balancing (Application, Network, and Classic), Amazon EMR, Amazon Elasticsearch Service, Amazon Glacier, AWS Identity and Access Management (IAM) (including Access Key Last Used), Amazon Inspector, AWS Key Management Service (KMS), Amazon Kinesis Data Streams, AWS Lambda, Amazon Aurora (MySQL and PostgreSQL), Amazon Redshift, Amazon Relational Database Service (RDS), AWS Server Migration Service, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), Amazon EC2 Systems Manager (SSM), AWS Trusted Advisor, Amazon Virtual Private Cloud, VM Import, VPN, Amazon API Gateway, AWS Snowball, AWS Snowball Edge, and AWS Step Functions.

Crossing the Regions
Many of the cool cross-region features of AWS can be used to span AWS GovCloud (US-East) and AWS GovCloud (US-West) in order to reduce latency or to increase workload resiliency and availability for mission-critical systems.

We are working to add support for DynamoDB Global Tables and Inter-Region VPC Peering.

AWS GovCloud (US) in Action
Our customers are already hosting many different types of applications in AWS GovCloud (US-West); here’s a small sample:

Enterprise Apps – Oracle, SAP, and Microsoft workloads that were traditionally provisioned for peak demand are now being run on scalable, cloud-based infrastructure.

HPC / Big Data – Organizations with large data sets are spinning up HPC clusters in the cloud in order to extract intelligence and to better serve their constituents.

Storage / DR – The ability to tap into vast amounts of cost-effective, highly durable cloud storage managed by US Persons supports a variety of DR approaches, from simple backups to hot standby. The addition of a second region allows you to make use of the cross-region features that I mentioned earlier.

Learn More
To learn more, check out the AWS GovCloud (US) page. If you are looking forward to making use of AWS GovCloud (US) and need a partner to help you to make it happen, take a look at the list of AWS GovCloud (US) Partners.

Jeff;

Categories: Cloud

New – Redis 5.0 Compatibility for Amazon ElastiCache

AWS Blog - Mon, 11/12/2018 - 14:28

Earlier this year we announced Redis 4.0 compatibility for Amazon ElastiCache. In that post, Randall explained how ElastiCache for Redis clusters can scale to terabytes of memory and millions of reads and writes per second! Other recent improvements to Amazon ElastiCache for Redis include:

Read Replica Scaling – Support for adding or removing read replica nodes to a Redis Cluster, along with a reduction of up to 40% in cluster creation time.

PCI DSS Compliance – Certification as Payment Card Industry Data Security Standard (PCI DSS) compliant. This allows you to use ElastiCache for Redis (engine versions 4.0.10 and higher) to build low-latency, high-throughput applications that process sensitive payment card data.

FedRAMP Authorized and Available in AWS GovCloud (US) – United States government customers and their partners can use ElastiCache for Redis to process and store their FedRAMP systems and data for mission-critical, high-impact workloads in the AWS GovCloud (US) Region, and at moderate impact level in the other AWS Regions in the US. To learn more, read the ElastiCache for Redis Compliance documentation.

In-Place Upgrades – Support for upgrading a Redis Cluster to a newer engine version in place while maintaining availability, except for a failover period measured in seconds.

New Instance Types – Support for the use of M5 and R5 instances, with significant performance improvements.

5.0 Compatibility
Today I am happy to announce Redis 5.0 compatibility for Amazon ElastiCache for Redis. This version of Redis includes support for a new Streams data type and new commands (ZPOPMIN and ZPOPMAX) for use on Sorted Sets, and also does a better job of defragmenting memory. To learn more, read What’s New in Redis 5?

As usual, you can use the ElastiCache Console, CLI, APIs, or a CloudFormation template to get started. I’ll use the Console, with the following settings:

My cluster is up and running within minutes:

I can also use the in-place upgrade feature that I mentioned earlier on my existing 4.0-compatible cluster. I select the cluster, click Modify, and the 5.0-compatible engine is already selected. I confirm the other settings and click Modify to proceed:

Streams in Action
The new Stream data type is very powerful! Each Stream has a name, and can be created by simply referencing it as part of an XADD command. Let’s say that I have a long-running process that generates files that need to be scanned and validated. For testing purposes, I can add a bunch of files to a stream named Files from the shell like this:

$ find /usr -name 'a*' -exec redis-cli -h r5cluster.seutl3.ng.0001.use1.cache.amazonaws.com \
    XADD Files \* f {} \;

I can retrieve values starting from the beginning of the stream using the command XREAD BLOCK 1000 STREAMS Files 0:

I can also read the values that are after a given ID:

In most cases, I would be doing the reads and the writes from code rather than from the command line, of course. This is a very simple example of the power of Redis 5 Streams and I am sure that you can do better!
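
As a rough sketch of what that code might look like with the redis-py client (using the cluster endpoint from the shell example above, and assuming redis-py 3.x for stream support):

    import redis

    r = redis.Redis(
        host="r5cluster.seutl3.ng.0001.use1.cache.amazonaws.com",
        port=6379,
        decode_responses=True,
    )

    # XADD: append an entry; "*" (the default ID) asks Redis to assign one
    entry_id = r.xadd("Files", {"f": "/usr/bin/awk"})

    # XREAD: fetch everything from the beginning of the stream,
    # blocking for up to 1000 ms if the stream is empty
    for stream, entries in r.xread({"Files": "0"}, block=1000):
        for eid, fields in entries:
            print(eid, fields)

    # Read only the entries that come after a given ID
    newer = r.xread({"Files": entry_id}, block=1000)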

Available Now
You can upgrade existing 4.0-compatible clusters and create new 5.0-compatible clusters today in all commercial AWS regions.

Jeff;

Categories: Cloud

AWS Quest 2 Update: The Road to re:Invent at the Midpoint

AWS Blog - Sat, 11/10/2018 - 07:04

Hey, AWSQuest News Blog readers – Greg Bilsland here. I work with Jeff behind the scenes on the AWS Blog, and I’ve been collaborating with the fine folks at Lone Shark Games on AWS Quest. You’ve done a great job tracking Ozz’s journey westward from Seattle to re:Invent. We wanted to give you a short midpoint update on where we’ve gone so far, and how far we have to go.

Our wayward robot pal started our journey in our fair Emerald City and got a cup of coffee (with an unexpected grid inside) before walking into the Pacific. We then traveled to Sydney, where Ozz had to corral a runaway kangaroo. In Tokyo, we gobbled up some sushi (who knew robots were so hungry?), and then continued to Seoul for a moonlit walk along the Banpo Bridge.

In Beijing, we then encountered over a thousand terra cotta warriors. Then we journeyed to Singapore, where we were introduced to the national hybrid creature, the highly impressive merlion. Our next stop was Mumbai, a metropolis where the locals enjoy many different varieties of homemade spices. Our last stop before the midpoint of the journey was the city of Stockholm, at which an interesting set of teddy bears talked our favorite robot into doing some shopping.

The second half of Ozz’s trek first brought us to Cape Town, where we sampled some of the musical delights of the region. Today we have arrived in Paris, and undoubtedly a challenging puzzle awaits us here.

That’s my midpoint report on our little friend’s journey to re:Invent. Keep solving! Watch for the culmination on Friday, November 16, at 12 pm. I hope to meet up with you at re:Invent!

—Greg Bilsland

Categories: Cloud

PHP 7.1.24 Released

PHP News - Thu, 11/08/2018 - 07:28
Categories: PHP

PHP 7.2.12 Released

PHP News - Thu, 11/08/2018 - 02:28
Categories: PHP

PHP 7.3.0RC5 Released

PHP News - Thu, 11/08/2018 - 02:11
Categories: PHP

New Lower-Cost, AMD-Powered M5a and R5a EC2 Instances

AWS Blog - Tue, 11/06/2018 - 09:09

From the start, AWS has focused on choice and economy. Driven by a never-ending torrent of customer requests that power our well-known Virtuous Cycle, I think we have delivered on both over the years:

Choice – AWS gives you choices in a wide range of dimensions including locations (18 operational geographic regions, 4 more in the works, and 1 local region), compute models (instances, containers, and serverless), EC2 instance types, relational and NoSQL database choices, development languages, and pricing/purchase models.

Economy – We have reduced prices 67 times so far, and work non-stop to drive down costs and to make AWS an increasingly better value over time. We study usage patterns, identify areas for innovation and improvement, and deploy updates across the entire AWS Cloud on a very regular and frequent basis.

Today I would like to tell you about our latest development, one that provides you with a choice of EC2 instances that are more economical than ever!

Powered by AMD
The newest EC2 instances are powered by custom AMD EPYC processors running at 2.5 GHz and are priced 10% lower than comparable instances. They are designed to be used for workloads that don’t use all of the compute power available to them, and provide you with a new opportunity to optimize your instance mix based on cost and performance.

Here’s what we are launching:

General Purpose – M5a instances are designed for general purpose workloads: web servers, app servers, dev/test environments, and gaming. The M5a instances are available in 6 sizes.

Memory Optimized – R5a instances are designed for memory-intensive workloads: data mining, in-memory analytics, caching, and so forth. The R5a instances are available in 6 sizes, with lower per-GiB memory pricing in comparison to the R5 instances.

The new instances are built on the AWS Nitro System. They can make use of existing HVM AMIs (as is the case with all other recent EC2 instance types, the AMI must include the ENA and NVMe drivers), and can be used in Cluster Placement Groups.

These new instances should be a great fit for customers who are looking to further cost-optimize their Amazon EC2 compute environment. As always, we recommend that you measure performance and cost on your own workloads when choosing your instance types.
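
If you want to kick the tires, an M5a instance launches like any other instance type. Here is a minimal boto3 sketch; the AMI ID is a placeholder, and per the note above it must be an HVM AMI with the ENA and NVMe drivers:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch a single m5a.large (the AMI ID below is a placeholder)
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="m5a.large",
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])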

General Purpose Instances
Here are the specs for the M5a instances:

Instance Name    vCPUs    RAM        EBS-Optimized Bandwidth    Network Bandwidth
m5a.large        2        8 GiB      Up to 2.120 Gbps           Up to 10 Gbps
m5a.xlarge       4        16 GiB     Up to 2.120 Gbps           Up to 10 Gbps
m5a.2xlarge      8        32 GiB     Up to 2.120 Gbps           Up to 10 Gbps
m5a.4xlarge      16       64 GiB     2.120 Gbps                 Up to 10 Gbps
m5a.12xlarge     48       192 GiB    5 Gbps                     10 Gbps
m5a.24xlarge     96       384 GiB    10 Gbps                    20 Gbps

Memory Optimized Instances
Here are the specs for the R5a instances:

Instance Name    vCPUs    RAM        EBS-Optimized Bandwidth    Network Bandwidth
r5a.large        2        16 GiB     Up to 2.120 Gbps           Up to 10 Gbps
r5a.xlarge       4        32 GiB     Up to 2.120 Gbps           Up to 10 Gbps
r5a.2xlarge      8        64 GiB     Up to 2.120 Gbps           Up to 10 Gbps
r5a.4xlarge      16       128 GiB    2.120 Gbps                 Up to 10 Gbps
r5a.12xlarge     48       384 GiB    5 Gbps                     10 Gbps
r5a.24xlarge     96       768 GiB    10 Gbps                    20 Gbps

Available Now
These instances are available now and you can start using them today in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Singapore) Regions in On-Demand, Spot, and Reserved Instance form. Pricing, as I noted earlier, is 10% lower than the equivalent existing instances. To learn more, visit our new AMD Instances page.

Jeff;

PS – We are also working on T3a instances; stay tuned for more info!

Categories: Cloud

Learn about AWS – November AWS Online Tech Talks

AWS Blog - Mon, 11/05/2018 - 12:03

AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. Join us this month to learn about AWS services and solutions. We’ll have experts online to help answer any questions you may have.

Featured this month! Check out the tech talks: Virtual Hands-On Workshop: Amazon Elasticsearch Service – Analyze Your CloudTrail Logs, AWS re:Invent: Know Before You Go, and AWS Office Hours: Amazon GuardDuty Tips and Tricks.

Register today!

Note – All sessions are free and in Pacific Time.

Tech talks this month:

AR/VR

November 13, 2018 | 11:00 AM – 12:00 PM PT – How to Create a Chatbot Using Amazon Sumerian and Sumerian Hosts – Learn how to quickly and easily create a chatbot using Amazon Sumerian & Sumerian Hosts.

Compute

November 19, 2018 | 11:00 AM – 12:00 PM PT – Using Amazon Lightsail to Create a Database – Learn how to set up a database on your Amazon Lightsail instance for your applications or stand-alone websites.

November 21, 2018 | 09:00 AM – 10:00 AM PT – Save up to 90% on CI/CD Workloads with Amazon EC2 Spot Instances – Learn how to automatically scale a fleet of Spot Instances with Jenkins and EC2 Spot Plug-In.

Containers

November 13, 2018 | 09:00 AM – 10:00 AM PT – Customer Showcase: How Portal Finance Scaled Their Containerized Application Seamlessly with AWS Fargate – Learn how to scale your containerized applications without managing servers and clusters, using AWS Fargate.

November 14, 2018 | 11:00 AM – 12:00 PM PT – Customer Showcase: How 99designs Used AWS Fargate and Datadog to Manage their Containerized Application – Learn how 99designs scales their containerized applications using AWS Fargate.

November 21, 2018 | 11:00 AM – 12:00 PM PT – Monitor the World: Meaningful Metrics for Containerized Apps and Clusters – Learn about metrics and tools you need to monitor your Kubernetes applications on AWS.

Data Lakes & Analytics

November 12, 2018 | 01:00 PM – 01:45 PM PT – Search Your DynamoDB Data with Amazon Elasticsearch Service – Learn the joint power of Amazon Elasticsearch Service and DynamoDB and how to set up your DynamoDB tables and streams to replicate your data to Amazon Elasticsearch Service.

November 13, 2018 | 01:00 PM – 01:45 PM PT – Virtual Hands-On Workshop: Amazon Elasticsearch Service – Analyze Your CloudTrail Logs – Get hands-on experience and learn how to ingest and analyze CloudTrail logs using Amazon Elasticsearch Service.

November 14, 2018 | 01:00 PM – 01:45 PM PT – Best Practices for Migrating Big Data Workloads to AWS – Learn how to migrate analytics, data processing (ETL), and data science workloads running on Apache Hadoop, Spark, and data warehouse appliances from on-premises deployments to AWS.

November 15, 2018 | 11:00 AM – 11:45 AM PT – Best Practices for Scaling Amazon Redshift – Learn about the most common scalability pain points with analytics platforms and see how Amazon Redshift can quickly scale to fulfill growing analytical needs and data volume.

Databases

November 12, 2018 | 11:00 AM – 11:45 AM PT – Modernize your SQL Server 2008/R2 Databases with AWS Database Services – As end of extended support for SQL Server 2008/R2 nears, learn how AWS’s portfolio of fully managed, cost-effective databases and easy-to-use migration tools can help.

DevOps

November 16, 2018 | 09:00 AM – 09:45 AM PT – Build and Orchestrate Serverless Applications on AWS with PowerShell – Learn how to build and orchestrate serverless applications on AWS with AWS Lambda and PowerShell.

End-User Computing

November 19, 2018 | 01:00 PM – 02:00 PM PT – Work Without Workstations with AppStream 2.0 – Learn how to work without workstations and accelerate your engineering workflows using AppStream 2.0.

Enterprise & Hybrid

November 19, 2018 | 09:00 AM – 10:00 AM PT – Enterprise DevOps: New Patterns of Efficiency – Learn how to implement “Enterprise DevOps” in your organization through building a culture of inclusion, common sense, and continuous improvement.

November 20, 2018 | 11:00 AM – 11:45 AM PT – Are Your Workloads Well-Architected? – Learn how to measure and improve your workloads with AWS Well-Architected best practices.

IoT

November 16, 2018 | 01:00 PM – 02:00 PM PT – Pushing Intelligence to the Edge in Industrial Applications – Learn how GE uses AWS IoT for industrial use cases, including 3D printing and aviation.

Machine Learning

November 12, 2018 | 09:00 AM – 09:45 AM PT – Automate for Efficiency with Amazon Transcribe and Amazon Translate – Learn how you can increase efficiency and reach of your operations with Amazon Translate and Amazon Transcribe.

Mobile

November 20, 2018 | 01:00 PM – 02:00 PM PT – GraphQL Deep Dive – Designing Schemas and Automating Deployment – Get an overview of the basics of how GraphQL works and dive into different schema designs, best practices, and considerations for providing data to your applications in production.

re:Invent

November 9, 2018 | 08:00 AM – 08:30 AM PT – Episode 7: Getting Around the re:Invent Campus – Learn how to efficiently get around the re:Invent campus using our new mobile app technology. Make sure you arrive on time and never miss a session.

November 14, 2018 | 08:00 AM – 08:30 AM PT – Episode 8: Know Before You Go – Learn about all the final details you need to know before you arrive in Las Vegas for AWS re:Invent!

Security, Identity & Compliance

November 16, 2018 | 11:00 AM – 12:00 PM PT – AWS Office Hours: Amazon GuardDuty Tips and Tricks – Join us for office hours and get the latest tips and tricks for Amazon GuardDuty from AWS Security experts.

Serverless

November 14, 2018 | 09:00 AM – 10:00 AM PT – Serverless Workflows for the Enterprise – Learn how to seamlessly build and deploy serverless applications across multiple teams in large organizations.

Storage

November 15, 2018 | 01:00 PM – 01:45 PM PT – Move From Tape Backups to AWS in 30 Minutes – Learn how to switch to cloud backups easily with AWS Storage Gateway.

November 20, 2018 | 09:00 AM – 10:00 AM PT – Deep Dive on Amazon S3 Security and Management – Amazon S3 provides some of the most enhanced data security features available in the cloud today, including access controls, encryption, security monitoring, remediation, and security standards and compliance certifications.

Categories: Cloud

Join me for the Camp re:Invent Trivia Challenge

AWS Blog - Mon, 11/05/2018 - 07:47

With less than 3 weeks to go until AWS re:Invent 2018, my colleagues and I are working harder than ever to produce the best educational event on the planet! With multiple keynotes, well over two thousand sessions, bootcamps, chalk talks, hands-on workshops, labs, and hackathons to choose from, I am confident that you will leave Las Vegas better informed than when you arrived.

Challenge Me
Today I would like to tell you about an opportunity to put your AWS knowledge to use in a new way. Sign up now and join me for the Camp re:Invent Trivia Challenge (7:00 PM on November 28th in the Venetian Theatre). You will have the opportunity to compete against me by answering questions about AWS, to have a lot of fun, and to pick up some of the limited edition Camp re:Invent and Jeff Barr pins. I have no idea what to study or how to prepare, so things could get very interesting really fast.

Come for the Challenge, Stay for the Goodies
By the way, in addition to over 60 AWS pins that you can earn by participating in various events and attending certain sessions, you will be able to get them from our partners and sponsors. You can also trade pins with other re:Invent attendees. Here are just a few of the pins (via the unofficial @reinventParties list) that you can earn, find, or trade:

I will also bring along some of my cute new stickers:

See you in Vegas
I am looking forward to meeting my fans and friends in Las Vegas. I have plenty on my agenda for the week, but I always have time to stop and say hello, so don’t be shy!

Jeff;

Categories: Cloud

AWS Quest 2 – The Road to re:Invent

AWS Blog - Thu, 11/01/2018 - 12:52

The first AWS Quest started in May of this year. As you may recall, my trusty robot companion went to pieces after burying some clues in this blog, the AWS Podcast, and other parts of the AWS site. Thanks to the tireless efforts of devoted puzzle solvers all over the world, all of the puzzles were found, all but one was solved, and we put Ozz back together in an action-packed broadcast on the AWS Twitch channel.

We had so much fun the first time around that we have decided to do it again! Ozz 2.0 is lighter, stronger, faster, cuter, and more mobile than ever. Just like last time, we’ve worked with our friends at Lone Shark Games to design a set of puzzles that will require multiple leaps of logic, group cooperation, and an indefatigable spirit to solve.

Follow The Orange Brick Road
I told Ozz to meet me in Las Vegas for AWS re:Invent, but I didn’t specify the route. Ozz, being adventurous and somewhat devious, decided to follow an orange brick road that heads west from Seattle. From what I can tell, Ozz plans to stop in 15 cities along the way and is looking for souvenirs to bring along to re:Invent.

Ozz will leave Seattle on November 1st after picking up a souvenir from Amazon’s home city. From there, Ozz is off to Sydney, Australia. Each puzzle will launch at aws.amazon.com/awsquest at noon in Ozz’s timezone.

Your job, should you decide to accept it, is to help find and decode the puzzles, and to help Ozz to decide what to bring to re:Invent.

Jeff;

PS – Ozz is looking for some friendly robotic faces along the way. From November 1 to 16, follow @awscloud on Twitter and share a picture of a robot around your city for a chance to get on the phone with me to chat about AWS and the cloud. We’ll also be looking for robots on Instagram, so follow @amazonwebservices there and share your robot pictures for everyone to enjoy. We will DM the winner by December 5, 2018 to coordinate the call. The post must contain #AWSQuest #Promotion and your profile must be public to be eligible.

Categories: Cloud
