
Cloud

New – Managed Databases for Amazon Lightsail

AWS Blog - Tue, 10/16/2018 - 11:58

Amazon Lightsail makes it easy for you to get started with AWS. You choose the operating system (and optional application) that you want to run, pick an instance plan, and create an instance, all in a matter of minutes. Lightsail offers low, predictable pricing, with instance plans that include compute power, storage, and data transfer:

Managed Databases
Today we are making Lightsail even more useful by giving you the ability to create a managed database with a couple of clicks. This has been one of our top customer requests and I am happy to be able to share this news.

This feature is going to be of interest to a very wide range of current and future Lightsail users, including students, independent developers, entrepreneurs, and IT managers. We’ve addressed the most common and complex issues that arise when setting up and running a database. As you will soon see, we have simplified and fine-tuned the process of choosing, launching, securing, accessing, monitoring, and maintaining a database!

Each Lightsail database bundle has a fixed, monthly price that includes the database instance, a generous amount of SSD-backed storage, a terabyte or more of data transfer to the Internet and other AWS regions, and automatic backups that give you point-in-time recovery for a 7-day period. You can also create manual database snapshots that are billed separately.

Creating a Managed Database
Let’s walk through the process of creating a managed database and loading an existing MySQL backup into it. I log in to the Lightsail Console and click Databases to get started. Then I click Create database to move forward:

I can see and edit all of the options at a glance. I choose a location, a database engine and version, and a plan, enter a name, and click Create database (all of these options have good defaults; a single click often suffices):

We are launching with support for MySQL 5.6 and 5.7, and will add support for PostgreSQL 9.6 and 10 very soon. The Standard database plan creates a database in one Availability Zone with no redundancy; the High Availability plan also creates a presence in a second AZ, and is recommended for production use.
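If you prefer to script database creation rather than click through the console, the Lightsail API exposes the same options. Here's a minimal boto3 sketch; the blueprint and bundle IDs are illustrative placeholders that you would look up with the get_relational_database_blueprints and get_relational_database_bundles calls, and the credentials are placeholders as well:

```python
# Hedged sketch: create a Lightsail managed MySQL database with boto3.
# The blueprint and bundle IDs below are placeholders; list the real ones with
# get_relational_database_blueprints / get_relational_database_bundles first.
import boto3

lightsail = boto3.client("lightsail", region_name="us-west-2")

response = lightsail.create_relational_database(
    relationalDatabaseName="Database-Oregon-1",
    relationalDatabaseBlueprintId="mysql_5_7",   # engine + version (placeholder ID)
    relationalDatabaseBundleId="micro_1_0",      # Standard plan bundle (placeholder ID)
    masterDatabaseName="mydb",
    masterUsername="dbmasteruser",
    masterUserPassword="choose-a-strong-password",
)
print(response["operations"])  # returns an operation you can poll until it completes
```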

Database creation takes just a few minutes, the status turns to Available, and my database is ready to use:

I click on Database-Oregon-1 to see the connection details and to access other management information & tools:

I’m ready to connect! I create an SSH connection to my Lightsail instance, ensure that the mysql package is installed, and connect using the information above (read Connecting to Your MySQL Database to learn more):
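If you are connecting from application code rather than the mysql client, a small Python sketch along these lines should work; the endpoint, user name, and password are placeholders for the connection details shown in the console:

```python
# Minimal sketch: connect to the Lightsail managed database from Python.
# Endpoint, user, and password are placeholders taken from the console's
# connection details panel.
import pymysql  # pip install pymysql

connection = pymysql.connect(
    host="your-database-endpoint.us-west-2.rds.amazonaws.com",  # placeholder endpoint
    user="dbmasteruser",
    password="your-master-password",
    database="mydb",
    connect_timeout=10,
)

with connection.cursor() as cursor:
    cursor.execute("SELECT VERSION()")   # quick sanity check of the connection
    print(cursor.fetchone())

connection.close()
```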

Now I want to import some existing data into my database. Lightsail lets me enable Data import mode in order to defer any backup or maintenance operations:

Enabling data import mode deletes any existing automatic snapshots; you may want to take a manual snapshot before starting your import if you are importing fresh data into an existing database.

I have a large (13 GB), ancient (2013-era) MySQL backup from a long-dead personal project; I download it from S3, uncompress it, and import it:

I can watch the metrics while the import is underway:

After the import is complete I disable data import mode, and I can run queries against my tables:

To learn more, read Importing Data into Your Database.

Lightsail manages all routine database operations. If I make a mistake and mess up my data, I can use the Emergency Restore to create a fresh database instance from an earlier point in time:

I can rewind by up to 7 days, but no further back than the point at which I last disabled data import mode.

I can also take snapshots, and use them later to create a fresh database instance:
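Snapshots and point-in-time restores are also available through the API. Here's a hedged boto3 sketch; the database and snapshot names are placeholders, and the parameter names assume the Lightsail relational database calls:

```python
# Hedged sketch: manual snapshot plus two restore paths (from a snapshot, or
# from a point in time within the 7-day window). All names are placeholders.
import boto3
from datetime import datetime, timedelta

lightsail = boto3.client("lightsail", region_name="us-west-2")

# Take a manual snapshot (billed separately, as noted above).
lightsail.create_relational_database_snapshot(
    relationalDatabaseName="Database-Oregon-1",
    relationalDatabaseSnapshotName="database-oregon-1-manual-1",
)

# Create a fresh database from that snapshot...
lightsail.create_relational_database_from_snapshot(
    relationalDatabaseName="Database-Oregon-2",
    relationalDatabaseSnapshotName="database-oregon-1-manual-1",
)

# ...or rewind an existing database to an earlier point in time.
lightsail.create_relational_database_from_snapshot(
    relationalDatabaseName="Database-Oregon-3",
    sourceRelationalDatabaseName="Database-Oregon-1",
    restoreTime=datetime.utcnow() - timedelta(hours=6),
    useLatestRestorableTime=False,
)
```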

Things to Know
Here are a couple of things to keep in mind when you use this new feature:

Engine Versions – We plan to support the two latest versions of MySQL, and will do the same for other database engines as we make them available.

High Availability – As is always the case for production AWS systems, you should use the High Availability option in order to maintain a database footprint that spans two Availability Zones. You can switch between Standard and High Availability using snapshots.

Scaling Storage – You can scale to a larger database instance by creating and then restoring a snapshot.

Data Transfer – Data transfer to and from Lightsail instances in the same AWS Region does not count against the usage that is included in your plan.

Amazon RDS – This feature shares core technology with Amazon RDS, and benefits from our operational experience with that family of services.

Available Now
Managed databases are available today in all AWS Regions where Lightsail is available:

Jeff;

Categories: Cloud

re:Invent 2018 – 55 Days to Go….

AWS Blog - Tue, 10/02/2018 - 05:50

As I write this, there are just 55 calendar days until AWS re:Invent 2018. My colleagues and I are working flat-out to bring you the best possible learning experience and I want to give you a quick update on a couple of things…

Transportation – Customer Obsession is the first Amazon Leadership Principle and we take your feedback seriously! The re:Invent 2018 campus is even bigger this year, and our transportation system has been tuned and scaled to match. This includes direct shuttle routes from venue to venue so that you don’t spend time waiting at other venues, access to real-time transportation info from within the re:Invent app, and on-site signage. The mobile app will even help you to navigate to your sessions while letting you know if you are on time. If you are feeling more independent and don’t want to ride the shuttles, we’ll have partnerships with ridesharing companies including Lyft and Uber. Visit the re:Invent Transportation page to learn more about our transportation plans, routes, and options.

Reserved Seating – In order to give you as many opportunities to see the technical content that matters the most to you, we are bringing back reserved seating. You will be able to make reservations starting at 10 AM PT on Thursday, October 11, so mark your calendars. Reserving a seat is the best way to ensure that you will get a seat in your favorite session without waiting in a long line, so be sure to arrive at least 10 minutes before the scheduled start. As I have mentioned before, we have already scheduled repeats of the most popular sessions, and made them available for reservation in the Session Catalog. Repeats will take place all week in all re:Invent venues, along with overflow sessions in our Content Hubs (centralized overflow rooms in every venue). We will also stream live content to the Content Hubs as the sessions fill up.

Trivia Night – Please join me at 7:30 PM on Wednesday in the Venetian Theatre for the first-ever Camp re:Invent Trivia Night. Come and test your re:Invent and AWS knowledge to see if you and your team can beat me at trivia (that should not be too difficult). The last person standing gets bragging rights and an awesome prize.

How to re:Invent – Whether you are a first-time attendee or a veteran re:Invent attendee, please take the time to watch our How to re:Invent videos. We want to make sure that you arrive fully prepared, ready to learn about the latest and greatest AWS services, meet your peers and members of the AWS teams, and walk away with the knowledge and the skills that will help you to succeed in your career.

See you in Vegas!

Jeff;

Categories: Cloud

Learn about AWS – October AWS Online Tech Talks

AWS Blog - Mon, 10/01/2018 - 09:17

AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. Join us this month to learn about AWS services and solutions. We’ll have experts online to help answer any questions you may have.

Featured this month: check out the webinars under AR/VR, End-User Computing, and Industry Solutions. Also, register for our second fireside chat discussion on Amazon Redshift.

Register today!

Note – All sessions are free and in Pacific Time.

Tech talks featured this month:

AR/VR

October 16, 2018 | 01:00 PM – 02:00 PM PT – Creating and Publishing AR, VR and 3D Applications with Amazon Sumerian – Learn about Amazon Sumerian, the fastest and easiest way to create and publish immersive applications.

Compute

October 25, 2018 | 09:00 AM – 10:00 AM PT – Running Cost Effective Batch Workloads with AWS Batch and Amazon EC2 Spot Instances – Learn how to run complex workloads, such as analytics, image processing, and machine learning applications efficiently and cost-effectively.

Data Lakes & Analytics

October 18, 2018 | 01:00 PM – 01:45 PM PT – Fireside Chat: The Evolution of Amazon Redshift – Join Vidhya Srinivasan, General Manager of Redshift, in a candid conversation as she discusses the product’s evolution, recently shipped features, and improvements.

October 15, 2018 | 01:00 PM – 01:45 PM PT – Customer Showcase: The Secret Sauce Behind GroupM’s Marketing Analytics Platform – Learn how GroupM – the world’s largest media investment group with more than $113.8bn in billings – created a modern data analytics platform using Amazon Redshift and Matillion.

Databases

October 15, 2018 | 11:00 AM – 12:00 PM PT – Supercharge Query Caching with AWS Database Services – Learn how AWS database services, including Amazon Relational Database Service (RDS) and Amazon ElastiCache, work together to make it simpler to add a caching layer to your database, delivering high availability and performance for query-intensive apps.

October 17, 2018 | 09:00 AM – 09:45 AM PT – How to Migrate from Cassandra to DynamoDB Using the New Cassandra Connector in the AWS Database Migration Service – Learn how to migrate from Cassandra to DynamoDB using the new Cassandra Connector in the AWS Database Migration Service.

End-User Computing

October 23, 2018 | 01:00 PM – 02:00 PM PT – How to use Amazon Linux WorkSpaces for Agile Development – Learn how to integrate your Amazon Linux WorkSpaces development environment with other AWS Developer Tools.

Enterprise & Hybrid

October 23, 2018 | 09:00 AM – 10:00 AM PT – Migrating Microsoft SQL Server 2008 Databases to AWS – Learn how you can provision, monitor, and manage Microsoft SQL Server on AWS.

Industry Solutions

October 24, 2018 | 11:00 AM – 12:00 PM PT – Tape-to-Cloud Media Migration Walkthrough – Learn from media-specialist SAs as they walk through a content migration solution featuring machine learning and media services to automate processing, packaging, and metadata extraction.

IoT

October 22, 2018 | 01:00 PM – 01:45 PM PT – Using Asset Monitoring in Industrial IoT Applications – Learn how AWS IoT is used in industrial applications to understand asset health and performance.

Machine Learning

October 15, 2018 | 09:00 AM – 09:45 AM PT – Build Intelligent Applications with Machine Learning on AWS – Learn how to accelerate development of AI applications using machine learning on AWS.

Management Tools

October 24, 2018 | 01:00 PM – 02:00 PM PT – Implementing Governance and Compliance in a Multi-Account, Multi-Region Scenario – Learn AWS Config best practices on how to implement governance and compliance in a multi-account, multi-Region scenario.

Networking

October 23, 2018 | 11:00 AM – 11:45 AM PT – How to Build Intelligent Web Applications @ Edge – Explore how Lambda@Edge can help you deliver low latency web applications.

October 25, 2018 | 01:00 PM – 02:00 PM PT – Deep Dive on Bring Your Own IP – Learn how to easily migrate legacy applications that use IP addresses with Bring Your Own IP.

re:Invent

October 10, 2018 | 08:00 AM – 08:30 AM PT – Episode 6: Mobile App & Reserved Seating – Discover new innovations coming to the re:Invent 2018 mobile experience this year. Plus, learn all about reserved seating for your priority sessions.

Security, Identity & Compliance

October 22, 2018 | 11:00 AM – 11:45 AM PT – Getting to Know AWS Secrets Manager – Learn how to protect the secrets used to access your applications, services, and IT resources.

Serverless

October 17, 2018 | 11:00 AM – 12:00 PM PT – Build Enterprise-Grade Serverless Apps – Learn how developers can design, develop, deliver, and monitor cloud applications as they take advantage of the AWS serverless platform and developer toolset.

Storage

October 24, 2018 | 09:00 AM – 09:45 AM PT – Deep Dive: New AWS Storage Gateway Hardware Appliance – Learn how you can use the AWS Storage Gateway hardware appliance to connect on-premises applications to AWS storage.

Categories: Cloud

Saving Koalas Using Genomics Research and Cloud Computing

AWS Blog - Fri, 09/28/2018 - 00:02

Today is Save the Koala Day and a perfect time to tell you about some noteworthy and ground-breaking research that was made possible by AWS Research Credits and the AWS Cloud.

Five years ago, a research team led by Dr. Rebecca Johnson (Director of the Australian Museum Research Institute) set out to learn more about koala populations, genetics, and diseases. Because the koala is a biologically unique animal with a very limited appetite, maintaining a healthy and genetically diverse population is a key element of any conservation plan. In addition to characterizing the genetic diversity of koala populations, the team wanted to strengthen Australia’s ability to lead large-scale genome sequencing projects.

Inside the Koala Genome
Last month the team published their results in Nature Genetics. Their paper (Adaptation and Conservation Insights from the Koala Genome) identifies the genomic basis for the koala’s unique biology. Even though I had to look up dozens of concepts as I read the paper, I was able to come away with a decent understanding of what they found. Here’s my lay summary:

Toxic Diet – The eucalyptus leaves favored by koalas contain a myriad of substances that are toxic to other species if ingested. Gene expansions and selection events in genes encoding enzymes with detoxification functions enable koalas to rapidly detoxify these substances, making them able to subsist on a diet favored by no other animal. The genetic repertoire underlying this accelerated metabolism also renders common anti-inflammatory medications and antibiotics ineffective for treating ailing koalas.

Food Choice – Koalas are, as I noted earlier, very picky eaters. Genetically speaking, this comes about because their senses of smell and taste are enhanced, with 6 genes giving them the ability to discriminate between plant metabolites on the basis of smell. The researchers also found that koalas have a gene that helps them to select eucalyptus leaves with a high water content, and another that enhances their ability to perceive bitter and umami flavors.

Reproduction – Specific genes which control ovulation and birth were identified. In the interest of frugality, female koalas produce eggs only when needed.

Koala Milk – Newborn koalas are the size of a kidney bean and weigh less than half a gram! They nurse for about a year, taking milk that changes in composition over time, with a potential genetic correlation. The researchers also identified genes whose products are known to have anti-microbial properties.

Immune Systems – The researchers identified genes that formed the basis for resistance, immunity, or susceptibility to certain diseases that affect koalas. They also found evidence of a “genomic invasion” (their words) where the koala retrovirus actually inserts itself into the genome.

Genetic Diversity – The researchers also examined how geological events like habitat barriers and surface temperatures have shaped genetic diversity and population evolution. They found that koalas from some areas had markedly less genetic diversity than those from others, with evidence that allowed them to correlate diversity (or the lack of it) with natural barriers such as the Hunter Valley.

Powered by AWS
Creating a complete gene sequence requires (among many other things) an incredible amount of compute power and a vast amount of storage.

While I don’t fully understand the process, I do know that it works on a bottom-up basis. The DNA samples are broken up into manageable pieces, each one containing several tens of thousands of base pairs. A variety of chemicals are applied to cause the different base constituents (A, T, C, or G) to fluoresce, and the resulting emission is captured, measured, and stored. Since this study generated a koala reference genome, the sequencing reads were assembled using an overlap-layout-consensus assembly algorithm known as Falcon, which was run on AWS. The koala genome comes in at 3.42 billion base pairs, slightly larger than the human genome.

I’m happy to report that this groundbreaking work was performed on AWS. The research team used cfnCluster to create multiple clusters, each with 500 to 1000 vCPUs, running Falcon from Pacific Biosciences. All in all, the team used 3 million EC2 core hours, most of them on EC2 Spot Instances. Having access to flexible, low-cost compute power allowed the bioinformatics team to experiment with the configuration of the Falcon pipeline as they tuned and adapted it to their workload.

We are happy to have done our small part to help with this interesting and valuable research!

Jeff;

Categories: Cloud

Now Available – Amazon EC2 High Memory Instances with 6, 9, and 12 TB of Memory, Perfect for SAP HANA

AWS Blog - Thu, 09/27/2018 - 14:19

The Altair 8800 computer that I built in 1977 had just 4 kilobytes of memory. Today I was able to use an EC2 instance with 12 terabytes (12 tebibytes to be exact) of memory, more than 3 billion times as much!

The new Amazon EC2 High Memory Instances let you take advantage of other AWS services including Amazon Elastic Block Store (EBS), Amazon Simple Storage Service (S3), AWS Identity and Access Management (IAM), Amazon CloudWatch, and AWS Config. They are designed to allow AWS customers to run large-scale SAP HANA installations, and can be used to build production systems that provide enterprise-grade data protection and business continuity.

Here are the specs:

Instance Name    Memory    Logical Processors    Dedicated EBS Bandwidth    Network Bandwidth
u-6tb1.metal     6 TiB     448                   14 Gbps                    25 Gbps
u-9tb1.metal     9 TiB     448                   14 Gbps                    25 Gbps
u-12tb1.metal    12 TiB    448                   14 Gbps                    25 Gbps

Each Logical Processor is a hyperthread on one of the 224 physical CPU cores. All three sizes are powered by the latest generation Intel® Xeon® Platinum 8176M (Skylake) processors running at 2.1 GHz (with Turbo Boost to 3.80 GHz), and are available as EC2 Dedicated Hosts for launch within a new or existing Amazon Virtual Private Cloud (VPC). You can launch them using the AWS Command Line Interface (CLI) or the EC2 API, and manage them there or in the EC2 Console.
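Once your account has access to the new instances, the launch flow follows the standard Dedicated Host pattern. Here's a hedged boto3 sketch; the AMI ID is a placeholder, and I'm assuming the usual allocate_hosts / run_instances combination applies to the u-* hosts:

```python
# Hedged sketch: allocate a Dedicated Host for a High Memory instance and then
# launch an instance onto it. The AMI ID is a placeholder; the instance type
# comes from the table above.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

host = ec2.allocate_hosts(
    InstanceType="u-12tb1.metal",
    AvailabilityZone="us-east-1a",
    Quantity=1,
)
host_id = host["HostIds"][0]

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: a SAP-certified OS image
    InstanceType="u-12tb1.metal",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "host", "HostId": host_id},  # pin the instance to the Dedicated Host
)
```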

The instances are EBS-Optimized by default, and give you low-latency access to encrypted and unencrypted EBS volumes. You can choose between Provisioned IOPS, General Purpose (SSD), and Streaming Magnetic volumes, and can attach multiple volumes, each with a distinct type and size, to each instance.

SAP HANA in Minutes
The EC2 High Memory instances are certified by SAP for OLTP and OLAP workloads such as S/4HANA, Suite on HANA, BW/4HANA, BW on HANA, and Datamart (see the SAP HANA Hardware Directory for more information).

We ran the SAP Standard Application Benchmark and measured the instances at 480,600 SAPS, making them suitable for very large workloads. Here’s an excerpt from the benchmark:

In anticipation of today’s launch, the EC2 team provisioned a u-12tb1.metal instance for my AWS account and I located it in the Dedicated Hosts section of the EC2 Console:

Following the directions in the SAP HANA on AWS Quick Start, I copy the Host Reservation ID, hop over to the CloudFormation Console and click Create Stack to get started. I choose my template, give my stack a name, and enter all of the necessary parameters, including the ID that I copied, and click Next to proceed:

On the next page I indicate that I want to tag my resources, leave everything else as-is, and click Next:

I review my settings, acknowledge that the stack might create IAM resources, and click Next to create the stack:

The AWS resources are created and SAP HANA is installed, all in less than 40 minutes:
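The same Quick Start launch can be scripted. Here's a hedged boto3 sketch; the template URL and parameter keys are placeholders rather than the Quick Start's actual names, so check the Quick Start documentation for the real ones:

```python
# Hedged sketch: launch the SAP HANA Quick Start stack with boto3 instead of
# the CloudFormation console. Template URL and parameter keys are placeholders.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

cfn.create_stack(
    StackName="sap-hana-12tb",
    TemplateURL="https://example-bucket.s3.amazonaws.com/sap-hana-quickstart.template",  # placeholder
    Parameters=[
        {"ParameterKey": "HostId", "ParameterValue": "h-0123456789abcdef0"},  # placeholder host reservation ID
        {"ParameterKey": "KeyName", "ParameterValue": "my-key-pair"},          # placeholder key pair
    ],
    Capabilities=["CAPABILITY_IAM"],  # acknowledge that the stack may create IAM resources
)

# Block until the resources are created and SAP HANA is installed.
cfn.get_waiter("stack_create_complete").wait(StackName="sap-hana-12tb")
```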

Using an EC2 instance on the public subnet of my VPC, I can access the new instance. Here’s the memory:

And here’s the CPU info:

I can also run an hdbsql query:

SELECT DISTINCT HOST, CAST(VALUE/1024/1024/1024 AS INTEGER) AS TOTAL_MEMORY_GB FROM SYS.M_MEMORY WHERE NAME='SYSTEM_MEMORY_SIZE';

Here’s the output, showing that SAP HANA has access to 12 TiB of memory:

Another option is to have the template create a second EC2 instance, this one running Windows on a public subnet, and accessible via RDP:

I could install HANA Studio on this instance and use its visual interface to run my SAP HANA queries.

The Quick Start implementation uses high performance SSD-based EBS storage volumes for all of your data. This gives you the power to switch to a larger instance in minutes without having to migrate any data.

Available Now
Just like the existing SAP-certified X1 and X1e instances, the EC2 High Memory instances are very cost-effective. For example, the effective hourly rate for the All Upfront 3-Year Reservation for a u-12tb1.metal Dedicated Host in the US East (N. Virginia) Region is $30.539 per hour.

These instances are now available in the US East (N. Virginia) and Asia Pacific (Tokyo) Regions as Dedicated Hosts with a 3-year term, and will be available soon in the US West (Oregon), Europe (Ireland), and AWS GovCloud (US) Regions. If you are ready to get started, contact your AWS account team or use the Contact Us page to make a request.

In the Works
We’re not stopping at 12 TiB, and are planning to launch instances with 18 TiB and 24 TiB of memory in 2019.

Jeff;

PS – If you have applications that might need multiple terabytes in the future but can run comfortably in less memory today, be sure to consider the R5, X1, and X1e instances.

 

Categories: Cloud

Meet the Newest AWS Heroes (September 2018 Edition)

AWS Blog - Fri, 09/21/2018 - 08:05

AWS Heroes are passionate AWS enthusiasts who use their extensive knowledge to teach others about all things AWS across a range of mediums. Many Heroes eagerly share knowledge online via forums, social media, or blogs; while others lead AWS User Groups or organize AWS Community Day events. Their extensive efforts to spread AWS knowledge have a significant impact within their local communities. Today we are excited to introduce the newest AWS Heroes:

Jaroslaw Zielinski – Poznan, Poland

AWS Community Hero Jaroslaw Zielinski is a Solutions Architect at Vernity in Poznan (Poland), where his responsibility is to support customers on their road to the cloud using cloud adoption patterns. Jaroslaw is a leader of AWS User Group Poland, which operates in 7 different cities around Poland. Additionally, he connects the community with the biggest IT conferences in the region – PLNOG, DevOpsDay, and Amazon@Innovation, to name just a few.

He supports numerous projects connected with evangelism, like Zombie Apocalypse Workshops or Cloud Builder’s Day. Bringing together various IT communities, he hosts the Cloud & Datacenter Day conference – the biggest community conference in Poland. In addition, his passion for IT is channeled into his own blog, Popołudnie w Sieci. He also publishes in various professional papers.

 

Jerry Hargrove – Kalama, USA

AWS Community Hero Jerry Hargrove is a cloud architect, developer and evangelist who guides companies on their journey to the cloud, helping them to build smart, secure and scalable applications. Currently with Lucidchart, a leading visual productivity platform, Jerry is a thought leader in the cloud industry and specializes in AWS product and services breakdowns, visualizations and implementation. He brings with him over 20 years of experience as a developer, architect & manager for companies like Rackspace, AWS and Intel.

You can find Jerry on Twitter compiling his famous sketch notes and creating Lucidchart templates that pinpoint practical tips for working in the cloud and helping developers increase efficiency. Jerry is the founder of the AWS Meetup Group in Salt Lake City, often contributes to meetups in the Pacific Northwest and San Francisco Bay area, and speaks at developer conferences worldwide. Jerry holds several professional AWS certifications.

 

Martin Buberl – Copenhagen, Denmark

AWS Community Hero Martin Buberl brings the New York hustle to Scandinavia. As VP Engineering at Trustpilot he is on a mission to build the best engineering teams in the Nordics and Baltics. With a person-centered approach, his focus is on high-leverage activities to maximize impact, customer value and iteration speed — and utilizing cloud technologies checks all those boxes.

His cloud-obsession made him an early adopter and evangelist of all types of AWS services throughout his career. Nowadays, he is especially passionate about Serverless, Big Data and Machine Learning and excited to leverage the cloud to transform those areas.

Martin is an AWS User Group Leader, organizer of the AWS Community Day Nordics, and founder of the AWS Community Nordics Slack. He has spoken at multiple international AWS events — AWS User Groups, AWS Community Days and AWS Global Summits — and is looking forward to continuing to share his passion for software engineering and cloud technologies with the community.

To learn more about the AWS Heroes program or to connect with an AWS Hero in your community, click here.

Categories: Cloud

New – Parallel Query for Amazon Aurora

AWS Blog - Thu, 09/20/2018 - 14:54

Amazon Aurora is a relational database that was designed to take full advantage of the abundance of networking, processing, and storage resources available in the cloud. While maintaining compatibility with MySQL and PostgreSQL on the user-visible side, Aurora makes use of a modern, purpose-built distributed storage system under the covers. Your data is striped across hundreds of storage nodes distributed over three distinct AWS Availability Zones, with two copies per zone, on fast SSD storage. Here’s what this looks like (extracted from Getting Started with Amazon Aurora):

New Parallel Query
When we launched Aurora we also hinted at our plans to apply the same scale-out design principle to other layers of the database stack. Today I would like to tell you about our next step along that path.

Each node in the storage layer pictured above also includes plenty of processing power. Aurora is now able to make great use of that processing power by taking your analytical queries (generally those that process all or a large part of a good-sized table) and running them in parallel across hundreds or thousands of storage nodes, with speed benefits approaching two orders of magnitude. Because this new model reduces network, CPU, and buffer pool contention, you can run a mix of analytical and transactional queries simultaneously on the same table while maintaining high throughput for both types of queries.

The instance class determines the number of parallel queries that can be active at a given time:

  • db.r*.large – 1 concurrent parallel query session
  • db.r*.xlarge – 2 concurrent parallel query sessions
  • db.r*.2xlarge – 4 concurrent parallel query sessions
  • db.r*.4xlarge – 8 concurrent parallel query sessions
  • db.r*.8xlarge – 16 concurrent parallel query sessions
  • db.r4.16xlarge – 16 concurrent parallel query sessions

You can use the aurora_pq parameter to enable and disable the use of parallel queries at the global and the session level.

Parallel queries enhance the performance of over 200 types of single-table predicates and hash joins. The Aurora query optimizer will automatically decide whether to use Parallel Query based on the size of the table and the amount of table data that is already in memory; you can also use the aurora_pq_force session variable to override the optimizer for testing purposes.
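To make that concrete, here's a minimal Python sketch (using the generic PyMySQL driver, with placeholder connection details) that forces Parallel Query for a session and checks the EXPLAIN output for the "Using parallel query" marker shown below:

```python
# Minimal sketch: confirm that a statement will use Parallel Query by looking
# for "Using parallel query" in its EXPLAIN output. Connection details are
# placeholders; aurora_pq_force is the session variable described above.
import pymysql  # pip install pymysql

conn = pymysql.connect(
    host="your-aurora-cluster-endpoint",  # placeholder
    user="admin",
    password="your-password",
    database="tpch",
)

query = "SELECT COUNT(*) FROM lineitem WHERE l_shipdate > DATE '1995-03-13'"

with conn.cursor() as cur:
    cur.execute("SET SESSION aurora_pq_force = ON")  # override the optimizer for testing
    cur.execute("EXPLAIN " + query)
    plan = cur.fetchall()
    # The Extra column is the last field of each EXPLAIN row.
    uses_pq = any("Using parallel query" in str(row[-1]) for row in plan)
    print("Parallel Query used:", uses_pq)

conn.close()
```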

Parallel Query in Action
You will need to create a fresh cluster in order to make use of the Parallel Query feature. You can create one from scratch, or you can restore a snapshot.

To create a cluster that supports Parallel Query, I simply choose Provisioned with Aurora parallel query enabled as the Capacity type:
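If you prefer the API to the console, here's a hedged boto3 sketch; it assumes that the 'parallelquery' engine mode corresponds to the capacity type shown above, and the identifiers and credentials are placeholders:

```python
# Hedged sketch: create an Aurora MySQL cluster with Parallel Query enabled.
# The 'parallelquery' engine mode is an assumption based on the capacity type
# shown in the console; verify it before relying on this.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_cluster(
    DBClusterIdentifier="aurora-pq-test",
    Engine="aurora",                 # Aurora with MySQL 5.6 compatibility, per the launch notes
    EngineMode="parallelquery",      # assumption: selects the "parallel query enabled" capacity type
    MasterUsername="admin",
    MasterUserPassword="choose-a-strong-password",
)

# Add an instance to the cluster (the instance class determines how many
# parallel query sessions can be active at once, per the list above).
rds.create_db_instance(
    DBInstanceIdentifier="aurora-pq-test-1",
    DBClusterIdentifier="aurora-pq-test",
    DBInstanceClass="db.r4.2xlarge",
    Engine="aurora",
)
```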

I used the CLI to restore a 100 GB snapshot for testing, and then explored one of the queries from the TPC-H benchmark. Here’s the basic query:

SELECT l_orderkey,
       SUM(l_extendedprice * (1-l_discount)) AS revenue,
       o_orderdate,
       o_shippriority
FROM customer, orders, lineitem
WHERE c_mktsegment='AUTOMOBILE'
  AND c_custkey = o_custkey
  AND l_orderkey = o_orderkey
  AND o_orderdate < date '1995-03-13'
  AND l_shipdate > date '1995-03-13'
GROUP BY l_orderkey, o_orderdate, o_shippriority
ORDER BY revenue DESC, o_orderdate
LIMIT 15;

The EXPLAIN command shows the query plan, including the use of Parallel Query:

+----+-------------+----------+------+-------------------------------+------+---------+------+-----------+-------------------------------------------------------------------------------------------------------------------------------+
| id | select_type | table    | type | possible_keys                 | key  | key_len | ref  | rows      | Extra                                                                                                                           |
+----+-------------+----------+------+-------------------------------+------+---------+------+-----------+-------------------------------------------------------------------------------------------------------------------------------+
|  1 | SIMPLE      | customer | ALL  | PRIMARY                       | NULL | NULL    | NULL |  14354602 | Using where; Using temporary; Using filesort                                                                                    |
|  1 | SIMPLE      | orders   | ALL  | PRIMARY,o_custkey,o_orderdate | NULL | NULL    | NULL | 154545408 | Using where; Using join buffer (Hash Join Outer table orders); Using parallel query (4 columns, 1 filters, 1 exprs; 0 extra)    |
|  1 | SIMPLE      | lineitem | ALL  | PRIMARY,l_shipdate            | NULL | NULL    | NULL | 606119300 | Using where; Using join buffer (Hash Join Outer table lineitem); Using parallel query (4 columns, 1 filters, 1 exprs; 0 extra)  |
+----+-------------+----------+------+-------------------------------+------+---------+------+-----------+-------------------------------------------------------------------------------------------------------------------------------+
3 rows in set (0.01 sec)

Here is the relevant part of the Extras column:

Using parallel query (4 columns, 1 filters, 1 exprs; 0 extra)

The query runs in less than 2 minutes when Parallel Query is used:

+------------+-------------+-------------+----------------+
| l_orderkey | revenue     | o_orderdate | o_shippriority |
+------------+-------------+-------------+----------------+
|   92511430 | 514726.4896 | 1995-03-06  |              0 |
|  593851010 | 475390.6058 | 1994-12-21  |              0 |
|  188390981 | 458617.4703 | 1995-03-11  |              0 |
|  241099140 | 457910.6038 | 1995-03-12  |              0 |
|  520521156 | 457157.6905 | 1995-03-07  |              0 |
|  160196293 | 456996.1155 | 1995-02-13  |              0 |
|  324814597 | 456802.9011 | 1995-03-12  |              0 |
|   81011334 | 455300.0146 | 1995-03-07  |              0 |
|   88281862 | 454961.1142 | 1995-03-03  |              0 |
|   28840519 | 454748.2485 | 1995-03-08  |              0 |
|  113920609 | 453897.2223 | 1995-02-06  |              0 |
|  377389669 | 453438.2989 | 1995-03-07  |              0 |
|  367200517 | 453067.7130 | 1995-02-26  |              0 |
|  232404000 | 452010.6506 | 1995-03-08  |              0 |
|   16384100 | 450935.1906 | 1995-03-02  |              0 |
+------------+-------------+-------------+----------------+
15 rows in set (1 min 53.36 sec)

I can disable Parallel Query for the session (I can use an RDS custom cluster parameter group for a longer-lasting effect):

set SESSION aurora_pq=OFF;

The query runs considerably slower without it:

+------------+-------------+-------------+----------------+
| l_orderkey | o_orderdate | revenue     | o_shippriority |
+------------+-------------+-------------+----------------+
|   92511430 | 1995-03-06  | 514726.4896 |              0 |
...
|   16384100 | 1995-03-02  | 450935.1906 |              0 |
+------------+-------------+-------------+----------------+
15 rows in set (1 hour 25 min 51.89 sec)

This was on a db.r4.2xlarge instance; other instance sizes, data sets, access patterns, and queries will perform differently. I can also override the query optimizer and insist on the use of Parallel Query for testing purposes:

set SESSION aurora_pq_force=ON;

Things to Know
Here are a couple of things to keep in mind when you start to explore Amazon Aurora Parallel Query:

Engine Support – We are launching with support for MySQL 5.6, and are working on support for MySQL 5.7 and PostgreSQL.

Table Formats – The table row format must be COMPACT; partitioned tables are not supported.

Data Types – The TEXT, BLOB, and GEOMETRY data types are not supported.

DDL – The table cannot have any pending fast online DDL operations.

Cost – You can make use of Parallel Query at no extra charge. However, because it accesses storage directly, there is a possibility that your IO cost will increase.

Give it a Shot
This feature is available now and you can start using it today!

Jeff;

 

Categories: Cloud

AWS Data Transfer Price Reductions – Up to 34% (Japan) and 28% (Australia)

AWS Blog - Wed, 09/19/2018 - 08:31

I’ve got good news for AWS customers who make use of our Asia Pacific (Tokyo) and Asia Pacific (Sydney) Regions. Effective September 1, 2018 we are reducing prices for data transfer from Amazon Elastic Compute Cloud (EC2), Amazon Simple Storage Service (S3), and Amazon CloudFront by up to 34% in Japan and 28% in Australia.

EC2 and S3 Data Transfer
Here are the new prices for data transfer from EC2 and S3 to the Internet:

EC2 & S3 Data Transfer Out to Internet    Japan (Old Rate / New Rate / Change)    Australia (Old Rate / New Rate / Change)
Up to 1 GB / Month                        $0.000 / $0.000 / 0%                    $0.000 / $0.000 / 0%
Next 9.999 TB / Month                     $0.140 / $0.114 / -19%                  $0.140 / $0.114 / -19%
Next 40 TB / Month                        $0.135 / $0.089 / -34%                  $0.135 / $0.098 / -27%
Next 100 TB / Month                       $0.130 / $0.086 / -34%                  $0.130 / $0.094 / -28%
Greater than 150 TB / Month               $0.120 / $0.084 / -30%                  $0.120 / $0.092 / -23%

You can consult the EC2 Pricing and S3 Pricing pages for more information.
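To make the tiering concrete, here's an illustrative Python sketch that estimates a monthly EC2/S3 data-transfer-out bill under the new Japan rates from the table above (it treats 1 TB as 1,000 GB and ignores everything else on your bill):

```python
# Illustrative sketch: estimate the monthly charge for EC2/S3 data transfer out
# to the Internet under the new Japan rates shown above. Tier sizes and prices
# come directly from the table; this ignores other services and billing details.
def japan_transfer_out_cost(gb_per_month: float) -> float:
    tiers = [                      # (tier size in GB, price per GB)
        (1, 0.000),                # first 1 GB free
        (9_999, 0.114),            # next 9.999 TB
        (40_000, 0.089),           # next 40 TB
        (100_000, 0.086),          # next 100 TB
        (float("inf"), 0.084),     # greater than 150 TB
    ]
    remaining, cost = gb_per_month, 0.0
    for size, price in tiers:
        used = min(remaining, size)
        cost += used * price
        remaining -= used
        if remaining <= 0:
            break
    return round(cost, 2)

print(japan_transfer_out_cost(50_000))   # 50 TB per month -> 4699.89 (USD, approximate)
```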

CloudFront Data Transfer
Here are the new prices for data transfer from CloudFront edge nodes to the Internet:

CloudFront Data Transfer Out to Internet    Japan (Old Rate / New Rate / Change)    Australia (Old Rate / New Rate / Change)
Up to 10 TB / Month                         $0.140 / $0.114 / -19%                 $0.140 / $0.114 / -19%
Next 40 TB / Month                          $0.135 / $0.089 / -34%                 $0.135 / $0.098 / -27%
Next 100 TB / Month                         $0.120 / $0.086 / -28%                 $0.120 / $0.094 / -22%
Next 350 TB / Month                         $0.100 / $0.084 / -16%                 $0.100 / $0.092 / -8%
Next 524 TB / Month                         $0.080 / $0.080 / 0%                   $0.095 / $0.090 / -5%
Next 4 PB / Month                           $0.070 / $0.070 / 0%                   $0.090 / $0.085 / -6%
Over 5 PB / Month                           $0.060 / $0.060 / 0%                   $0.085 / $0.080 / -6%

Visit the CloudFront Pricing page for more information.

We have also reduced the price of data transfer from CloudFront to your Origin. The price for CloudFront Data Transfer to Origin from edge locations in Australia has been reduced by 20%, to $0.080 per GB. This price applies to content uploads via POST and PUT.

Things to Know
Here are a couple of interesting things that you should know about AWS and data transfer:

AWS Free Tier – You can use the AWS Free Tier to get started with, and to learn more about, EC2, S3, CloudFront, and many other AWS services. The AWS Getting Started page contains lots of resources to help you with your first project.

Data Transfer from AWS Origins to CloudFront – There is no charge for data transfers from an AWS origin (S3, EC2, Elastic Load Balancing, and so forth) to any CloudFront edge location.

CloudFront Reserved Capacity Pricing – If you routinely use CloudFront to deliver 10 TB or more of content per month, you should investigate our Reserved Capacity pricing. You can receive a significant discount by committing to transfer 10 TB or more of content from a single region, with additional discounts at higher levels of usage. To learn more or to sign up, simply Contact Us.

Jeff;

 

Categories: Cloud

New – AWS Storage Gateway Hardware Appliance

AWS Blog - Tue, 09/18/2018 - 12:46

AWS Storage Gateway connects your on-premises applications to AWS storage services such as Amazon Simple Storage Service (S3), Amazon Elastic Block Store (EBS), and Amazon Glacier. It runs in your existing virtualized environment and is visible to your applications and your client operating systems as a file share, a local block volume, or a virtual tape library. The resulting hybrid storage model gives our customers the ability to use their AWS Storage Gateways for backup, archiving, disaster recovery, cloud data processing, storage tiering, and migration.

New Hardware Appliance
Today we are making Storage Gateway available as a hardware appliance, adding to the existing support for VMware ESXi, Microsoft Hyper-V, and Amazon EC2. This means that you can now make use of Storage Gateway in situations where you do not have a virtualized environment, server-class hardware, or IT staff with the specialized skills that are needed to manage them. You can order appliances from Amazon.com for delivery to branch offices, warehouses, and “outpost” offices that lack dedicated IT resources. Setup (as you will see in a minute) is quick and easy, and gives you access to three storage solutions:

File Gateway – A file interface to Amazon S3, accessible via NFS or SMB. The files are stored as S3 objects, allowing you to make use of specialized S3 features such as lifecycle management and cross-region replication. You can trigger AWS Lambda functions, run Amazon Athena queries, and use Amazon Macie to discover and classify sensitive data.

Volume Gateway – Cloud-backed storage volumes, accessible as local iSCSI volumes. Gateways can be configured to cache frequently accessed data locally, or to store a full copy of all data locally. You can create EBS snapshots of the volumes and use them for disaster recovery or data migration.

Tape Gateway – A cloud-based virtual tape library (VTL), accessible via iSCSI, so you can replace your on-premises tape infrastructure, without changing your backup workflow.

To learn more about each of these solutions, read What is AWS Storage Gateway.

The AWS Storage Gateway Hardware Appliance is based on a specially configured Dell EMC PowerEdge R640 Rack Server that is pre-loaded with AWS Storage Gateway software. It has 2 Intel® Xeon® processors, 128 GB of memory, 6 TB of usable SSD storage for your locally cached data, and redundant power supplies; you can order one from Amazon.com:

If you have an Amazon Business account (they’re free), you can use a purchase order for the transaction. In addition to simplifying deployment, this standardized configuration helps to assure consistent performance for your local applications.

Hardware Setup
As you know, I like to go hands-on with new AWS products. My colleagues shipped a pre-release appliance to me; I left it under the watchful eye of my CSO (Canine Security Officer) until I was ready to write this post:

I don’t have a server room or a rack, so I set it up on my hobby table for testing:

In addition to the appliance, I also scrounged up a VGA cable, a USB keyboard, a small monitor, and a power adapter (C13 to NEMA 5-15). The adapter is necessary because the cord included with the appliance is intended to plug into a power distribution jack commonly found in a data center. I connected it all up, turned it on and watched it boot up, then entered a new administrative password.

Following the directions in the documentation, I configured an IPV4 address, using DHCP for convenience:

I captured the IP address for use in the next step, selected Back (the UI is keyboard-driven) and then logged out. This is the only step that takes place on the local console.

Gateway Configuration
At this point I will switch from past to present, and walk you through the configuration process. As directed by the Getting Started Guide, I open the Storage Gateway Console on the same network as the appliance, select the region where I want to create my gateway, and click Get started:

I select File gateway and click Next to proceed:

I select Hardware Appliance as my host platform (I can click Buy on Amazon to purchase one if necessary), and click Next:

Then I enter the IP address of my appliance and click Connect:

I enter a name for my gateway (jbgw1), set the time zone, pick ZFS as my RAID Volume Manager, and click Activate to proceed:

My gateway is activated within a second or two and I can see it in the Hardware section of the console:

At this point I am free to use a console that is not on the same network, so I’ll switch back to my trusty WorkSpace!

Now that my hardware has been activated, I can launch the actual gateway service on it. I select the appliance, and choose Launch Gateway from the Actions menu:

I choose the desired gateway type, enter a name (fgw1) for it, and click Launch gateway:

The gateway will start off in the Offline status, and transition to Online within 3 to 5 minutes. The next step is to allocate local storage by clicking Edit local disks:

Since I am creating a file gateway, all of the local storage is used for caching:

Now I can create a file share on my appliance! I click Create file share, enter the name of an existing S3 bucket, and choose NFS or SMB, then click Next:

I configure a couple of S3 options, request creation of a new IAM role, and click Next:

I review all of my choices and click Create file share:
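The same file share can also be created programmatically once the gateway is activated. Here's a hedged boto3 sketch; the gateway ARN, IAM role ARN, bucket name, and client CIDR are placeholders:

```python
# Hedged sketch: create an NFS file share on an activated gateway through the
# Storage Gateway API. All ARNs and the CIDR below are placeholders.
import boto3
import uuid

sgw = boto3.client("storagegateway", region_name="us-west-2")

share = sgw.create_nfs_file_share(
    ClientToken=str(uuid.uuid4()),                       # idempotency token
    GatewayARN="arn:aws:storagegateway:us-west-2:123456789012:gateway/sgw-EXAMPLE",
    Role="arn:aws:iam::123456789012:role/StorageGatewayBucketAccess",  # placeholder role
    LocationARN="arn:aws:s3:::my-existing-bucket",       # the existing S3 bucket backing the share
    ClientList=["10.0.0.0/16"],                          # clients allowed to mount the share
)
print(share["FileShareARN"])
```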

After I create the share I can see the commands that are used to mount it in each client environment:

I mount the share on my Ubuntu desktop (I had to install the nfs-client package first) and copy a bunch of files to it:

Then I visit the S3 bucket and see that the gateway has already uploaded the files:

Finally, I have the option to change the configuration of my appliance. After making sure that all network clients have unmounted the file share, I remove the existing gateway:

And launch a new one:

And there you have it. I installed and configured the appliance, created a file share that was accessible from my on-premises systems, and then copied files to it for replication to the cloud.

Now Available
The Storage Gateway Hardware Appliance is available now and you can purchase one today. Start in the AWS Storage Gateway Console and follow the steps above!

Jeff;

 

 

Categories: Cloud

New – AWS Systems Manager Session Manager for Shell Access to EC2 Instances

AWS Blog - Tue, 09/11/2018 - 13:03

It is a very interesting time to be a corporate IT administrator. On the one hand, developers are talking about (and implementing) an idyllic future built on infrastructure as code, where servers and other resources are treated as cattle. On the other hand, legacy systems still must be treated as pets, set up and maintained by hand or with the aid of limited automation. Many of the customers that I speak with are making the transition to the future at a rapid pace, but need to work in the world that exists today. For example, they still need shell-level access to their servers on occasion. They might need to kill runaway processes, consult server logs, fine-tune configurations, or install temporary patches, all while maintaining a strong security profile. They want to avoid the hassle that comes with running Bastion hosts and the risks that arise when opening up inbound SSH ports on the instances.

We’ve already addressed some of the need for shell-level access with the AWS Systems Manager Run Command. This AWS facility gives administrators secure access to EC2 instances. It allows them to create command documents and run them on any desired set of EC2 instances, with support for both Linux and Microsoft Windows. The commands are run asynchronously, with output captured for review.

New Session Manager
Today we are adding a new option for shell-level access. The new Session Manager makes the AWS Systems Manager even more powerful. You can now use a new browser-based interactive shell and a command-line interface (CLI) to manage your Windows and Linux instances. Here’s what you get:

Secure Access – You don’t have to manually set up user accounts, passwords, or SSH keys on the instances and you don’t have to open up any inbound ports. Session Manager communicates with the instances via the SSM Agent across an encrypted tunnel that originates on the instance, and does not require a bastion host.

Access Control – You use IAM policies and users to control access to your instances, and don’t need to distribute SSH keys. You can limit access to a desired time/maintenance window by using IAM’s Date Condition Operators.

Auditability – Commands and responses can be logged to Amazon CloudWatch and to an S3 bucket. You can arrange to receive an SNS notification when a new session is started.

Interactivity – Commands are executed synchronously in a full interactive bash (Linux) or PowerShell (Windows) environment.

Programming and Scripting – In addition to the console access that I will show you in a moment, you can also initiate sessions from the command line (aws ssm ...) or via the Session Manager APIs.

The SSM Agent running on the EC2 instances must be able to connect to Session Manager’s public endpoint. You can also set up a PrivateLink connection to allow instances running in private VPCs (without Internet access or a public IP address) to connect to Session Manager.
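For the Programming and Scripting option mentioned above, here's a hedged boto3 sketch; the instance ID is a placeholder, and an actual interactive shell still relies on the Session Manager plugin that the console and the aws ssm CLI use behind the scenes:

```python
# Hedged sketch: drive Session Manager from code. This shows the API surface
# (start, list, terminate); the interactive terminal itself is handled by the
# Session Manager plugin. The instance ID is a placeholder.
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

session = ssm.start_session(Target="i-0123456789abcdef0")
print(session["SessionId"], session["StreamUrl"])

# Sessions can also be listed...
active = ssm.describe_sessions(State="Active")
for s in active["Sessions"]:
    print(s["SessionId"], s["Target"])

# ...and terminated programmatically.
ssm.terminate_session(SessionId=session["SessionId"])
```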

Session Manager in Action
In order to use Session Manager to access my EC2 instances, the instances must be running the latest version (2.3.12 or above) of the SSM Agent. The instance role for the instances must reference a policy that allows access to the appropriate services; you can create your own or use AmazonEC2RoleForSSM. Here are my EC2 instances (sk1 and sk2 are running Amazon Linux; sk3-win and sk4-win are running Microsoft Windows):

Before I run my first command, I open AWS Systems Manager and click Preferences. Since I want to log my commands, I enter the name of my S3 bucket and my CloudWatch log group. If I enter either or both values, the instance policy must also grant access to them:

I’m ready to roll! I click Sessions, see that I have no active sessions, and click Start session to move ahead:

I select a Linux instance (sk1), and click Start session again:

The session opens up immediately:

I can do the same for one of my Windows instances:

The log streams are visible in CloudWatch:

Each stream contains the content of a single session:

In the Works
As usual, we have some additional features in the works for Session Manager. Here’s a sneak peek:

SSH Client – You will be able to create SSH sessions atop Session Manager without opening up any inbound ports.

On-Premises Access – We plan to give you the ability to access your on-premises instances (which must be running the SSM Agent) via Session Manager.

Available Now
Session Manager is available in all AWS regions (including AWS GovCloud) at no extra charge.

Jeff;

Categories: Cloud

AWS – Ready for the Next Storm

AWS Blog - Tue, 09/11/2018 - 12:08

As I have shared in the past (AWS – Ready to Weather the Storm) we take extensive precautions to help ensure that AWS will remain operational in the face of hurricanes, storms, and other natural disasters. With Hurricane Florence heading for the east coast of the United States, I thought it would be a good time to review and update some of the most important points from that post. Here’s what I want you to know:

Availability Zones – We replicate critical components of AWS across multiple Availability Zones to ensure high availability. Common points of failure, such as generators, UPS units, and air conditioning, are not shared across Availability Zones. Electrical power systems are designed to be fully redundant and can be maintained without impacting operations. The AWS Well-Architected Framework provides guidance on the proper use of multiple Availability Zones to build applications that are reliable and resilient, as does the Building Fault-Tolerant Applications on AWS whitepaper.

Contingency Planning – We maintain contingency plans and regularly rehearse our responses. We maintain a series of incident response plans and update them regularly to incorporate lessons learned and to prepare for emerging threats. In the days leading up to a known event such as a hurricane, we increase fuel supplies, update staffing plans, and add provisions to ensure the health and safety of our support teams.

Data Transfer – With a storage capacity of 100 TB per device, AWS Snowball Edge appliances can be used to quickly move large amounts of data to the cloud.

Disaster Response – When call volumes spike before, during, or after a disaster, Amazon Connect can supplement your existing call center resources and allow you to provide a better response.

Support – You can contact AWS Support if you are in need of assistance with any of these issues.

Jeff;

 

 

Categories: Cloud

Learn about AWS Services and Solutions – September AWS Online Tech Talks

AWS Blog - Mon, 09/10/2018 - 11:38

AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. Join us this month to learn about AWS services and solutions. We’ll have experts online to help answer any questions you may have.

Featured this month is our first ever fireside chat discussion. Join Debanjan Saha, General Manager of Amazon Aurora and Amazon RDS, to learn how customers are using our relational database services and leveraging database innovations.

Register today!

Note – All sessions are free and in Pacific Time.

Tech talks featured this month:

Compute

September 24, 2018 | 09:00 AM – 09:45 AM PT – Accelerating Product Development with HPC on AWS – Learn how you can accelerate product development by harnessing the power of high performance computing on AWS.

September 26, 2018 | 09:00 AM – 10:00 AM PT – Introducing New Amazon EC2 T3 Instances – General Purpose Burstable Instances – Learn about new Amazon EC2 T3 instance types and how they can be used for various use cases to lower infrastructure costs.

September 27, 2018 | 09:00 AM – 09:45 AM PT – Hybrid Cloud Customer Use Cases on AWS: Part 2 – Learn about popular hybrid cloud customer use cases on AWS.

Containers

September 19, 2018 | 11:00 AM – 11:45 AM PT – How Talroo Used AWS Fargate to Improve their Application Scaling – Learn how Talroo, a data-driven solution for talent and jobs, migrated their applications to AWS Fargate so they can run their application without worrying about managing infrastructure.

Data Lakes & Analytics

September 17, 2018 | 11:00 AM – 11:45 AM PT – Secure Your Amazon Elasticsearch Service Domain – Learn about the multi-level security controls provided by Amazon Elasticsearch Service (Amazon ES) and how to set the security for your Amazon ES domain to prevent unauthorized data access.

September 20, 2018 | 11:00 AM – 12:00 PM PT – New Innovations from Amazon Kinesis for Real-Time Analytics – Learn about the new innovations from Amazon Kinesis for real-time analytics.

Databases

September 17, 2018 | 01:00 PM – 02:00 PM PT – Applied Live Migration to DynamoDB from Cassandra – Learn how to migrate a live Cassandra-based application to DynamoDB.

September 18, 2018 | 11:00 AM – 11:45 AM PT – Scaling Your Redis Workloads with Redis Cluster – Learn how Redis cluster with Amazon ElastiCache provides scalability and availability for enterprise workloads.

Featured: September 20, 2018 | 09:00 AM – 09:45 AM PT – Fireside Chat: Relational Database Innovation at AWS – Join our fireside chat with Debanjan Saha, GM, Amazon Aurora and Amazon RDS to learn how customers are using our relational database services and leveraging database innovations.

DevOps

September 19, 2018 | 09:00 AM – 10:00 AM PT – Serverless Application Debugging and Delivery – Learn how to bring traditional best practices to serverless application debugging and delivery.

Enterprise & Hybrid

September 26, 2018 | 11:00 AM – 12:00 PM PT – Transforming Product Development with the Cloud – Learn how to transform your development practices with the cloud.

September 27, 2018 | 11:00 AM – 12:00 PM PT – Fueling High Performance Computing (HPC) on AWS with GPUs – Learn how you can accelerate time-to-results for your HPC applications by harnessing the power of GPU-based compute instances on AWS.

IoT

September 24, 2018 | 01:00 PM – 01:45 PM PT – Manage Security of Your IoT Devices with AWS IoT Device Defender – Learn how AWS IoT Device Defender can help you manage the security of IoT devices.

September 26, 2018 | 01:00 PM – 02:00 PM PT – Over-the-Air Updates with Amazon FreeRTOS – Learn how to execute over-the-air updates on connected microcontroller-based devices with Amazon FreeRTOS.

Machine Learning

September 17, 2018 | 09:00 AM – 09:45 AM PT – Build Intelligent Applications with Machine Learning on AWS – Learn how to accelerate development of AI applications using machine learning on AWS.

September 18, 2018 | 09:00 AM – 09:45 AM PT – How to Integrate Natural Language Processing and Elasticsearch for Better Analytics – Learn how to process, analyze and visualize data by pairing Amazon Comprehend with Amazon Elasticsearch.

September 20, 2018 | 01:00 PM – 01:45 PM PT – Build, Train and Deploy Machine Learning Models on AWS with Amazon SageMaker – Dive deep into building, training, & deploying machine learning models quickly and easily using Amazon SageMaker.

Management Tools

September 19, 2018 | 01:00 PM – 02:00 PM PT – Automated Windows and Linux Patching – Learn how AWS Systems Manager can help reduce data breach risks across your environment through automated patching.

re:Invent

September 12, 2018 | 08:00 AM – 08:30 AM PT – Episode 5: Deep Dive with Our Community Heroes and Jeff Barr – Get the insider secrets with top recommendations and tips for re:Invent 2018 from AWS community experts.

Security, Identity, & Compliance

September 24, 2018 | 11:00 AM – 12:00 PM PT – Enhanced Security Analytics Using AWS WAF Full Logging – Learn how to use AWS WAF security incidence logs to detect threats.

September 27, 2018 | 01:00 PM – 02:00 PM PT – Threat Response Scenarios Using Amazon GuardDuty – Discover methods for operationalizing your threat detection using Amazon GuardDuty.

Serverless

September 18, 2018 | 01:00 PM – 02:00 PM PT – Best Practices for Building Enterprise Grade APIs with Amazon API Gateway – Learn best practices for building and operating enterprise-grade APIs with Amazon API Gateway.

Storage

September 25, 2018 | 09:00 AM – 10:00 AM PT – Ditch Your NAS! Move to Amazon EFS – Learn how to move your on-premises file storage to Amazon EFS.

September 25, 2018 | 11:00 AM – 12:00 PM PT – Deep Dive on Amazon Elastic File System (EFS): Scalable, Reliable, and Elastic File Storage for the AWS Cloud – Get live demos and learn tips & tricks for optimizing your file storage on EFS.

September 25, 2018 | 01:00 PM – 01:45 PM PT – Integrating File Services to Power Your Media & Entertainment Workloads – Learn how AWS file services deliver high performance shared file storage for media & entertainment workflows.

Categories: Cloud

Amazon AppStream 2.0 – New Application Settings Persistence and a Quick Launch Recap

AWS Blog - Mon, 09/10/2018 - 10:11

Amazon AppStream 2.0 gives you access to Windows desktop applications through a web browser. Thousands of AWS customers, including SOLIDWORKS, Siemens, and MathWorks are already using AppStream 2.0 to deliver applications to their customers.

Today I would like to bring you up to date on some recent additions to AppStream 2.0, wrapping up with a closer look at a brand new feature that will automatically save application customizations (preferences, bookmarks, toolbar settings, connection profiles, and the like) and Windows settings between your sessions.

The recent additions to AppStream 2.0 can be divided into four categories:

User Enhancements – Support for time zone, locale, and language input, better copy/paste, and the new application persistence feature.

Admin Improvements – The ability to configure default application settings, control access to some system resources, copy images across AWS regions, establish custom branding, and share images between AWS accounts.

Storage Integration – Support for Microsoft OneDrive for Business and Google Drive for G Suite.

Regional Expansion – AppStream 2.0 recently became available in three additional AWS regions in Europe and Asia.

Let’s take a look at each item and then at application settings persistence….

User Enhancements
In June we gave AppStream 2.0 users control over the time zone, locale, and input methods. Once set, the values apply to future sessions in the same AWS region. This feature (formally known as Regional Settings) must be enabled by the AppStream 2.0 administrator as detailed in Enable Regional Settings for Your AppStream 2.0 Users.

In July we added keyboard shortcuts for copy/paste between your local device and your AppStream 2.0 sessions when using Google Chrome.

Admin Improvements
In February we gave AppStream 2.0 administrators the ability to copy AppStream 2.0 images to other AWS regions, simplifying the process of creating and managing global application deployments (to learn more, visit Tag and Copy an Image):

In March we gave AppStream 2.0 administrators additional control over the user experience, including the ability to customize the logo, color, text, and help links in the application catalog page. Read Add Your Custom Branding to AppStream 2.0 to learn more.

In May we added administrative control over the data that moves to and from the AppStream 2.0 streaming sessions. AppStream 2.0 administrators can control access to file upload, file download, printing, and copy/paste to and from local applications. Read Create AppStream 2.0 Fleets and Stacks to learn more.

In June we gave AppStream 2.0 administrators the power to configure default application settings (connection profiles, browser settings, and plugins) on behalf of their users. Read Enabling Default OS and Application Settings for Your Users to learn more.

In July we gave AppStream 2.0 administrators the ability to share AppStream 2.0 images between AWS accounts for use in the same AWS Region. To learn more, take a look at the UpdateImagePermissions API and the update-image-permissions command.
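
If you prefer to script image sharing, here is a rough sketch using the boto3 AppStream client; the image name, account ID, and permission flags below are illustrative placeholders rather than values from this post:

import boto3

appstream = boto3.client("appstream")

# Share an image with another AWS account in the same region.
# "MyBaseImage" and the account ID are placeholders.
appstream.update_image_permissions(
    Name="MyBaseImage",
    SharedAccountId="123456789012",
    ImagePermissions={
        "allowFleet": True,          # the other account can launch fleets from the image
        "allowImageBuilder": False,  # but cannot use it to create new image builders
    },
)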

Storage Integration
Both of these launches provide AppStream 2.0 users with additional storage options for the documents that they access, edit, and create:

Launched in June, the Google Drive for G Suite support allows users to access files on a Google Drive from inside of their applications. Read Google Drive for G Suite is now enabled on Amazon AppStream 2.0 to learn how to enable this feature for an AppStream application stack.

Similarly, the Microsoft OneDrive for Business support that was launched in July allows users to access files stored in OneDrive for Business accounts. Read Amazon AppStream 2.0 adds support for OneDrive for Business to learn how to set this up.

 

Regional Expansion
In January we made AppStream 2.0 available in the Asia Pacific (Singapore) and Asia Pacific (Sydney) Regions.

In March we made AppStream 2.0 available in the Europe (Frankfurt) Region.

See the AWS Region Table for the full list of regions where AppStream 2.0 is available.

Application Settings Persistence
With the past out of the way, let’s take a look at today’s new feature, Application Settings Persistence!

As you can see from the launch recap above, AppStream 2.0 already saves several important application and system settings between sessions. Today we are adding support for the elements that make up the Windows Roaming Profile. This includes:

Windows Profile – The contents of C:\users\user_name\appdata.

Windows Profile Folder – The contents of C:\users\user_name.

Windows Registry – The tree of registry entries rooted at HKEY_CURRENT_USER.

This feature must be enabled by the AppStream 2.0 administrator. The contents of the Windows Roaming Profile are stored in an S3 bucket in the administrator’s AWS account, with an initial storage allowance (easily increased) of up to 1 GB per user. The S3 bucket is configured for Server Side Encryption with keys managed by S3. Data moves between AppStream 2.0 and S3 across a connection that is protected by SSL. The administrator can choose to enable S3 versioning to allow recovery from a corrupted profile.

Application Settings Persistence can be enabled for an existing stack, as long as it is running the latest version of the AppStream 2.0 Agent. Here’s how it is enabled when creating a new stack:

Putting multiple stacks in the same settings group allows them to share a common set of user settings. The settings are applied when the user logs in, and then persisted back to S3 when they log out.
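
For admins who script their stacks, here is a hedged sketch of enabling the feature at stack creation time with the boto3 AppStream client; the stack name and settings group are placeholders:

import boto3

appstream = boto3.client("appstream")

# Create a stack with Application Settings Persistence enabled.
# Stacks that share the same SettingsGroup share one set of saved user settings.
appstream.create_stack(
    Name="MarketingApps",                       # placeholder stack name
    ApplicationSettings={
        "Enabled": True,
        "SettingsGroup": "marketing-settings",  # placeholder settings group
    },
)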

This feature is available now and AppStream 2.0 administrators can enable it today. The only cost is for the S3 storage consumed by the stored profiles, charged at the usual S3 prices.

Jeff;

PS – Follow the AWS Desktop and Application Streaming Blog to make sure that you know about new features as quickly as possible.

 

Categories: Cloud

AWS X-Ray Now Supports Amazon API Gateway and New Sampling Rules API

AWS Blog - Thu, 09/06/2018 - 12:13

My colleague Jeff first introduced us to AWS X-Ray almost 2 years ago in his post from AWS re:Invent. If you’re not already aware, AWS X-Ray helps developers analyze and debug everything from simple web apps to large and complex distributed microservices, both in production and in development. Since X-Ray became generally available in 2017, we’ve iterated rapidly on customer feedback and continued to make enhancements to the service like encryption with AWS Key Management Service (KMS), new SDKs and language support (Python!), open sourcing the daemon, and latency visualization tools. Today, we’re adding two new features:

    • Support for Amazon API Gateway, making it easier to trace and analyze requests as they travel through your APIs to the underlying services.
    • Support for controlling sampling rules from the AWS X-Ray console and API, which we launched recently.

Let me show you how to enable tracing for an API.

Enabling X-Ray Tracing

I’ll start with a simple API deployed to API Gateway. I’ll add two endpoints: one to push records into Amazon Kinesis Data Streams and one to invoke a simple AWS Lambda function. It looks something like this:

After deploying my API, I can go to the Stages sub console, and select a specific stage, like “dev” or “production”. From there, I can enable X-Ray tracing by navigating to the Logs/Tracing tab, selecting Enable X-Ray Tracing and clicking Save Changes.
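
If you would rather enable tracing outside the console, here is a hedged sketch using the boto3 API Gateway client; the REST API ID and stage name are placeholders:

import boto3

apigw = boto3.client("apigateway")

# Turn on X-Ray tracing for an existing stage.
# "a1b2c3d4e5" and "dev" are placeholders for your REST API ID and stage name.
apigw.update_stage(
    restApiId="a1b2c3d4e5",
    stageName="dev",
    patchOperations=[
        {"op": "replace", "path": "/tracingEnabled", "value": "true"}
    ],
)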

After tracing is enabled, I can hop over to the X-Ray console to look at my sampling rules in the new Sampling interface.

I can modify the rules in the console and, of course, with the CLI, SDKs, or API. Let’s take a brief interlude to talk about sampling rules.

Sampling Rules
The sampling rules allow me to customize, at a very granular level, the requests and traces I want to record. This allows me to control the amount of data that I record on-the-fly, across code running anywhere (AWS Lambda, Amazon ECS, Amazon Elastic Compute Cloud (EC2), or even on-prem), all without having to rewrite any code or redeploy an application.

The default rule that is pictured above states that it will record the first request each second, and five percent of any additional requests. We talk about that one request each second as the reservoir, which ensures that at least one trace is recorded each second. The five percent of additional requests is what we refer to as the fixed rate. Both the reservoir and the fixed rate are configurable. If I set the reservoir size to 50 and the fixed rate to 10%, then if 100 requests per second match the rule, the total number of requests sampled is 55 requests per second: the 50-request reservoir plus 10% of the remaining 50.

Configuring my X-Ray recorders to read sampling rules from the X-Ray service allows the service to maintain the sampling rate and reservoir across all of my distributed compute. If I want to enable this functionality, I just install the latest version of the X-Ray SDK and daemon on my instances. At the moment only the GA SDKs are supported, with support for Ruby and Go on the way. With services like API Gateway and Lambda, I can configure everything right in the X-Ray console or API. There’s a lot more detail on this feature in the documentation, and I suggest taking the time to check it out.
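
To give a feel for the API, here is a hedged sketch that creates a rule along the lines described above, using the boto3 X-Ray client; the rule name, priority, and matching criteria are placeholders:

import boto3

xray = boto3.client("xray")

# Create a rule with a 50-request reservoir and a 10% fixed rate.
# The name, priority, and matchers below are placeholders; "*" matches everything.
xray.create_sampling_rule(
    SamplingRule={
        "RuleName": "checkout-api",   # placeholder rule name
        "Priority": 100,              # lower numbers are evaluated first
        "ReservoirSize": 50,          # sample up to 50 matching requests/second before the fixed rate applies
        "FixedRate": 0.10,            # then sample 10% of the rest
        "ServiceName": "*",
        "ServiceType": "*",
        "Host": "*",
        "HTTPMethod": "*",
        "URLPath": "/checkout/*",     # placeholder path filter
        "ResourceARN": "*",
        "Version": 1,
    }
)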

While I can, of course, use the sampling rules to control costs, the dynamic nature and the granularity of the rules is also extremely powerful for debugging production systems. If I know one particular URL or service is going to need extra monitoring I can specify that as part of the sampling rule. I can filter on individual stages of APIs, service types, service names, hosts, ARNs, HTTP methods, segment attributes, and more. This lets me quickly examine distributed microservices at 30,000 feet, identify issues, adjust some rules, and then dive deep into production requests. I can use this to develop insights about problems occurring in the 99th percentile of my traffic and deliver a better overall customer experience. I remember building and deploying a lot of ad-hoc instrumentation over the years, at various companies, to try to support something like this, and I don’t think I was ever particularly successful. Now that I can just deploy X-Ray and adjust sampling rules centrally, it feels like I have a debugging crystal ball. I really wish I’d had this tool 5 years ago.

Ok, enough reminiscing, let’s hop back to the walkthrough.

I’ll stick with the default sampling rule for now. Since we’ve enabled tracing and I’ve got some requests running, after about 30 seconds I can refresh my service map and look at the results. I can click on any node to view the traces directly or drop into the Traces sub console to look at all of the traces.

From there, I can see the individual URLs being triggered, the source IPs, and various other useful metrics.

If I want to dive deeper, I can write some filtering rules in the search bar and find a particular trace. An API Gateway segment has a few useful annotations that I can use to filter and group like the API ID and stage. This is what a typical API Gateway trace might look like.

Adding API Gateway support to X-Ray gives us end-to-end production traceability in serverless environments and sampling rules give us the ability to adjust our tracing in real time without redeploying any code. I had the pleasure of speaking with Ashley Sole from Skyscanner, about how they use AWS X-Ray at the AWS Summit in London last year, and these were both features he asked me about earlier that day. I hope this release makes it easier for Ashley and other developers to debug and analyze their production applications.

Available Now

Support for both of these features is available, today, in all public regions that have both API Gateway and X-Ray. In fact, X-Ray launched their new console and API last week so you may have already seen it! You can start using it right now. As always, let us know what you think on Twitter or in the comments below.

Randall

Categories: Cloud

Extending AWS CloudFormation with AWS Lambda Powered Macros

AWS Blog - Thu, 09/06/2018 - 11:47

Today I’m really excited to show you a powerful new feature of AWS CloudFormation called Macros. CloudFormation Macros allow developers to extend the native syntax of CloudFormation templates by calling out to AWS Lambda-powered transformations. This is the same technology that powers the popular Serverless Application Model functionality, but the transforms run in your own accounts, on your own Lambda functions, and they’re completely customizable. CloudFormation, if you’re new to AWS, is an absolutely essential tool for modeling and defining your infrastructure as code (YAML or JSON). It is a core building block for all of AWS, and many of our services depend on it.

There are two major steps for using macros. First, we need to define a macro, which of course, we do with a CloudFormation template. Second, to use the created macro in our template we need to add it as a transform for the entire template or call it directly. Throughout this post, I use the term macro and transform somewhat interchangeably. Ready to see how this works?

Creating a CloudFormation Macro

Creating a macro has two components: a definition and an implementation. To create the definition of a macro, we create a CloudFormation resource of type AWS::CloudFormation::Macro that outlines which Lambda function to use and what the macro should be called.

Type: "AWS::CloudFormation::Macro"
Properties:
  Description: String
  FunctionName: String
  LogGroupName: String
  LogRoleARN: String
  Name: String

The Name of the macro must be unique throughout the region, and the Lambda function referenced by FunctionName must be in the same region the macro is being created in. When you execute the macro template, it will make that macro available for other templates to use. The implementation of the macro is fulfilled by a Lambda function. Macros can be in their own templates or grouped with others, but you won’t be able to use a macro in the same template you’re registering it in. The Lambda function receives a JSON payload that looks something like this:

{
  "region": "us-east-1",
  "accountId": "$ACCOUNT_ID",
  "fragment": { ... },
  "transformId": "$TRANSFORM_ID",
  "params": { ... },
  "requestId": "$REQUEST_ID",
  "templateParameterValues": { ... }
}

The fragment portion of the payload contains either the entire template or the relevant fragments of the template – depending on how the transform is invoked from the calling template. The fragment will always be in JSON, even if the template is in YAML.

The Lambda function is expected to return a simple JSON response:

{
  "requestId": "$REQUEST_ID",
  "status": "success",
  "fragment": { ... }
}

The requestId needs to be the same as the one received in the input payload, and if status contains any value other than success (case-insensitive), the changeset will fail to create. The fragment must contain valid CloudFormation JSON for the transformed template. Even if your function performs no action, it still needs to return the fragment for it to be included in the final template.

Using CloudFormation Macros


To use the macro we simply call out to Fn::Transform with the required parameters. If we want to have a macro parse the whole template, we can include it in our list of transforms in the template the same way we would with SAM: Transform: [Echo]. When we execute this template, the transforms are collected into a changeset by calling out to each macro’s specified function and assembling the final template.

Let’s imagine we have a dummy Lambda function called EchoFunction, it just logs the data passed into it and returns the fragments unchanged. We define the macro as a normal CloudFormation resource, like this:

EchoMacro:
  Type: "AWS::CloudFormation::Macro"
  Properties:
    FunctionName: arn:aws:lambda:us-east-1:1234567:function:EchoFunction
    Name: EchoMacro

The code for the Lambda function could be as simple as this:

def lambda_handler(event, context):
    print(event)
    return {
        "requestId": event['requestId'],
        "status": "success",
        "fragment": event["fragment"]
    }

Then, after deploying this function and executing the macro template, we can invoke the macro in a transform at the top level of any other template like this:

AWSTemplateFormatVersion: 2010-09-09
Transform: [EchoMacro, AWS::Serverless-2016-10-31]
Resources:
  FancyTable:
    Type: AWS::Serverless::SimpleTable

The CloudFormation service creates a changeset for the template by first calling the Echo macro we defined and then the AWS::Serverless transform. Macros listed in Transform are executed in the order they appear.

We could also invoke the macro using the Fn::Transform intrinsic function which allows us to pass in additional parameters. For example:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  MyS3Bucket:
    Type: 'AWS::S3::Bucket'
    Fn::Transform:
      Name: EchoMacro
      Parameters:
        Key: Value

The inline transform will have access to all of its sibling nodes and all of its children nodes. Transforms are processed from deepest to shallowest which means top-level transforms are executed last. Since I know most of you are going to ask: no you cannot include macros within macros – but nice try.

When you execute the CloudFormation template, it will ask you to create a changeset, and you can preview the transformed output before deploying.
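
As a rough sketch of the same flow from code, here is a hedged example using the boto3 CloudFormation client; the stack name, change set name, and template path are placeholders, and my understanding is that change sets for templates containing macros require the CAPABILITY_AUTO_EXPAND capability:

import boto3

cfn = boto3.client("cloudformation")

# Create a change set for a template that uses a macro, then preview it.
# "echo-demo", "preview-1", and "template.yaml" are placeholder names.
with open("template.yaml") as f:
    template_body = f.read()

cfn.create_change_set(
    StackName="echo-demo",
    ChangeSetName="preview-1",
    ChangeSetType="CREATE",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_AUTO_EXPAND"],  # allows macros/transforms to run
)

# Inspect the proposed changes before executing the change set.
print(cfn.describe_change_set(StackName="echo-demo", ChangeSetName="preview-1"))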

Example Macros

We’re launching a number of reference macros to help developers get started and I expect many people will publish others. These four are the winners from a little internal hackathon we had prior to releasing this feature:

PyPlate – Allows you to inline Python in your templates (Jay McConnel – Partner SA)
ShortHand – Defines a short-hand syntax for common CloudFormation resources (Steve Engledow – Solutions Builder)
StackMetrics – Adds CloudWatch metrics to stacks (Steve Engledow and Jason Gregson – Global SA)
String Functions – Adds common string functions to your templates (Jay McConnel – Partner SA)

Here are a few ideas I thought of that might be fun for someone to implement:

If you end up building something cool I’m more than happy to tweet it out!

Available Now

CloudFormation Macros are available today, in all AWS regions that have AWS Lambda. There is no additional CloudFormation charge for Macros, meaning you are billed only the normal AWS Lambda function charges. The documentation has more information that may be helpful.

This is one of my favorite new features for CloudFormation and I’m excited to see some of the amazing things our customers will build with it. The real power here is that you can extend your existing infrastructure as code with code. The possibilities enabled by this new functionality are virtually unlimited.

Randall

Categories: Cloud

Chat with the Alexa Prize Finalists Today

AWS Blog - Tue, 09/04/2018 - 11:24

The Alexa Prize is an annual competition designed to spur academic research and development in the field of conversational artificial intelligence. This year, students are working to build socialbots that can engage in a fun, high-quality conversation on popular societal topics for up to 20 minutes. In order to succeed at this task, the teams must innovate in a broad range of areas including knowledge acquisition, natural language understanding, natural language generation, context modeling, common-sense reasoning, and dialog planning. They use the Alexa Skills Kit (ASK) to construct their bot and to receive real-time feedback on its performance.

Last month the socialbots from Heriot-Watt University (Alana), Czech Technical University (Alquist), and UC Davis (Gunrock) were chosen as the finalists (watch the Twitch stream to learn more). The competition was tough, with points assigned for the potential scientific contribution to the field, the technical merit of the approach, the overall novelty of the idea, and the team’s ability to deliver on their vision.

Time to Chat
We’re now ready for the final round.

Step up to your nearest Alexa-powered device and say “Alexa, let’s chat!” You will be connected to one of the three socialbots (chosen at random) and can converse with it for as long as you would like. When you are through, say “Alexa stop,” and rate the socialbot when prompted. You can also provide additional feedback for the team. We’ll announce the winner at AWS re:Invent 2018 in Las Vegas.

Jeff;

PS – If you are ready to build your very own Alexa Skill, check out the Alexa Skills Kit Tutorials and subscribe to the Alexa Blogs.

 

Categories: Cloud

In the Works – Amazon RDS on VMware

AWS Blog - Wed, 08/29/2018 - 10:52

Database administrators spend a lot of time provisioning hardware, installing and patching operating systems and databases, and managing backups. All of this undifferentiated heavy lifting keeps the lights on but often takes time away from higher-level efforts that have a higher return on investment. For many years, Amazon Relational Database Service (RDS) has taken care of this heavy lifting, and simplified the use of MariaDB, Microsoft SQL Server, MySQL, Oracle, and PostgreSQL in the cloud. AWS customers love the high availability, scalability, durability, and management simplicity of RDS.

Earlier this week we announced that we are working to bring the benefits of RDS to on-premises virtualized environments, to hybrid environments, and to VMware Cloud on AWS. You will be able to provision new on-premises database instances in minutes with a couple of clicks, make backups to on-premises or cloud-based storage, and establish read replicas running on-premises or in the AWS cloud. Amazon RDS on vSphere will take care of OS and database patching, and will let you migrate your on-premises databases to AWS with a single click.

Inside Amazon RDS on VMware
I sat down with the development team to learn more about Amazon RDS on VMware. Here’s a quick summary of what I learned:

Architecture – Your vSphere environment is effectively a private, local AWS Availability Zone (AZ), connected to AWS across a VPN tunnel running over the Internet or an AWS Direct Connect connection. You will be able to create Multi-AZ instances of RDS that span vSphere clusters.

Backups – Backups can make use of local (on-premises) storage or AWS, and are subject to both local and AWS retention policies. Backups are portable, and can be used to create an in-cloud Amazon RDS instance. Point in Time Recovery (PITR) will be supported, as long as you restore to the same environment.

Management – You will be able to manage your Amazon RDS on vSphere instances from the Amazon RDS Console and from vCenter. You will also be able to use the Amazon RDS CLI and the Amazon RDS APIs.

Regions – We’ll be launching in the US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), and Europe (Frankfurt) Regions, with more to come over time.

Register for the Preview
If you would like to be among the first to take Amazon RDS on VMware for a spin, you can Register for the Preview. I’ll have more information (and a hands-on blog post) in the near future, so stay tuned!

Jeff;

 

Categories: Cloud

New – Over-the-Air (OTA) Updates for Amazon FreeRTOS

AWS Blog - Tue, 08/28/2018 - 08:08

Amazon FreeRTOS is an operating system for the microcontrollers that power connected devices such as appliances, fitness trackers, industrial sensors, smart utility meters, security systems, and the like. Designed for use in small, low-powered devices, Amazon FreeRTOS extends the FreeRTOS kernel with libraries for communication with cloud services such as AWS IoT Core and with more powerful edge devices that are running AWS Greengrass (to learn more, read Announcing Amazon FreeRTOS – Enabling Billions of Devices to Securely Benefit from the Cloud).

Unlike more powerful, general-purpose computers that include generous amounts of local memory and storage, and the ability to load and run code on demand, microcontrollers are often driven by firmware that is loaded at the factory and then updated with bug fixes and new features from time to time over the life of the device. While some devices are able to accept updates in the field and while they are running, others must be disconnected, removed from service, and updated manually. This can be disruptive, inconvenient, and expensive, not to mention time-consuming.

As usual, we want to provide a better solution for our customers!

Over-the-Air Updates
Today we are making Amazon FreeRTOS even more useful with the addition of an over-the-air update mechanism that can be used to deliver updates to devices in the field. Here are the most important properties of this new feature:

Security – Updates can be signed by an integrated code signer, streamed to the target device across a TLS-protected connection, and then verified on the target device in order to guard against corrupt, unauthorized, or fraudulent updates.

Fault Tolerance – In order to guard against failed updates that can result in a useless, “bricked” device, the update process is resilient and keeps partial updates from taking effect, leaving the device in an operable state.

Scalability – Device fleets often contain thousands or millions of devices, and can be divided into groups for updating purposes, powered by AWS IoT Device Management.

Frugality – Microcontrollers have limited amounts of RAM (often 128KB or so) and compute power. Amazon FreeRTOS makes the most of these scarce resources by using a single TLS connection for updates and other AWS IoT Core communication, and by using the lightweight MQTT protocol.

Each device must include the OTA Updates Library. This library contains an agent that listens for update jobs and supervises the update process.

OTA in Action
I don’t happen to have a fleet of devices deployed, so I’ll have to limit this post to the highlights and direct you to the OTA Tutorial for more info.

Each update takes the form of an AWS IoT job. A job specifies a list of target devices (things and/or thing groups) and references a job document that describes the operations to be performed on each target. The job document, in turn, points to the code or data to be deployed for the update, and specifies the desired code signing option. Code signing ensures that the deployed content is genuine; you can sign the content yourself ahead of time or request that it be done as part of the job.

Jobs can be run once (a snapshot job), or whenever a change is detected in a target (a continuous job). Continuous jobs can be used to onboard or upgrade new devices as they are added to a thing group.

After the job has been created, AWS IoT will publish an OTA job message via MQTT. The OTA Updates library will download the signed content in streaming fashion, supervise the update, and report status back to AWS IoT.
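
If you want to script the job instead of using the console, here is a hedged sketch using the boto3 AWS IoT client; every ID, ARN, and S3 location below is a placeholder, and the exact shape of the files parameter should be checked against the current API reference:

import boto3

iot = boto3.client("iot")

# Create a continuous OTA update job that targets a thing group.
# All names, ARNs, and S3 locations below are placeholders.
iot.create_ota_update(
    otaUpdateId="fleet-firmware-1-2-3",
    targets=["arn:aws:iot:us-east-1:123456789012:thinggroup/sensor-fleet"],
    targetSelection="CONTINUOUS",   # also applies to devices added to the group later
    files=[
        {
            "fileName": "firmware-1.2.3.bin",
            "fileLocation": {
                "s3Location": {"bucket": "my-firmware-bucket", "key": "firmware-1.2.3.bin"}
            },
        }
    ],
    roleArn="arn:aws:iam::123456789012:role/ota-update-role",
)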

You can create and manage jobs from the AWS IoT Console, and can also build your own tools using the CLI and the API. I open the Console and click Create a job to get started:

Then I click Create OTA update job:

I select and sign my firmware image:

From there I would select my things or thing groups, initiate the job, and monitor the status:

Again, to learn more, check out the tutorial.

This new feature is available now and you can start using it today.

Jeff;

Categories: Cloud

Amazon DynamoDB – Features to Power Your Enterprise

AWS Blog - Mon, 08/27/2018 - 05:57

I first told you about Amazon DynamoDB in early 2012, and said:

We want you to think big, to dream big dreams, and to envision (and then build) data-intensive applications that can scale from zero users up to tens or hundreds of millions of users before you know it. We want you to succeed, and we don’t want your database to get in the way. Focus on your app and on building a user base, and leave the driving to us.

Six years later, DynamoDB handles trillions of requests per day, and is the NoSQL database of choice for more than 100,000 AWS customers.

Every so often I like to take a look back and summarize some of our most recent launches. I want to make sure that you don’t miss something of importance due to our ever-quickening pace of innovation, and I also like to put the individual releases into a larger context.

For the Enterprise
Many of our recent DynamoDB launches have been driven by the needs of our enterprise customers. For example:

Global Tables – Announced last November, global tables exist in two or more AWS Regions, with fast automated replication across Regions.

Encryption – Announced in February, tables can be encrypted at rest with no overhead.

Point-in-Time Recovery – Announced in March, continuous backups support the ability to restore a table to a prior state with a resolution of one second, going up to 35 days into the past.

DynamoDB Service Level Agreement – Announced in June, the SLA defines availability expectations for DynamoDB tables.

Adaptive Capacity – Though not a new feature, a popular recent blog post explained how DynamoDB automatically adapts to changing access patterns.

Let’s review each of these important features. Even though I have flagged them as being of particular value to enterprises, I am confident that all DynamoDB users will find them valuable.

Global Tables
Even though I try not to play favorites when it comes to services or features, I have to admit that I really like this one. It allows you to create tables that are automatically replicated across two or more AWS Regions, with full support for multi-master writes, all with a couple of clicks. You get an additional level of redundancy (tables are also replicated across three Availability Zones in each region) and fast read/write performance that can scale to meet the needs of the most demanding global apps.

Global tables can be used in nine AWS Regions (we recently added support for three more) and can be set up when you create the table:

To learn more, read Amazon DynamoDB Update – Global Tables and On-Demand Backup.
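
As a rough sketch of scripting this, here is a hedged example using the boto3 DynamoDB client; the table name and regions are placeholders, and the (empty) table is assumed to already exist in each region with streams enabled:

import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")

# Link identical, empty tables (with streams enabled) in two regions into one global table.
# "Users" and the region names are placeholders.
ddb.create_global_table(
    GlobalTableName="Users",
    ReplicationGroup=[
        {"RegionName": "us-east-1"},
        {"RegionName": "eu-west-1"},
    ],
)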

Encryption
Our customers store sensitive data in DynamoDB and need to protect it in order to achieve their compliance objectives. The encryption at rest feature protects data stored in tables, local secondary indexes, and global secondary indexes using AES-256. The encryption adds no storage overhead, is completely transparent, and does not affect latency. It can be enabled with one click when you create a new table:

To learn more, read New – Encryption at Rest for DynamoDB.
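
For reference, here is a hedged sketch of turning encryption on at table creation time with boto3; the table name and key schema are placeholders:

import boto3

ddb = boto3.client("dynamodb")

# Create a table with server-side encryption enabled at creation time.
# "Orders" and the key schema below are placeholders.
ddb.create_table(
    TableName="Orders",
    AttributeDefinitions=[{"AttributeName": "OrderId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "OrderId", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    SSESpecification={"Enabled": True},
)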

Point-in-Time Recovery
Even when you take every possible operational precaution, you may still do something regrettable to your production database. When (not if) that happens, you can use the DynamoDB point-in-time recovery feature to turn back time, restoring the database to its state as of up to 35 days earlier. Assuming that you enabled continuous backups for the table, restoration is as simple as choosing the desired point in time:

To learn more, read New – Amazon DynamoDB Continuous Backups and Point-in-Time Recovery (PITR).
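
Here is a hedged sketch of enabling continuous backups and then restoring with boto3; the table names and timestamp are placeholders:

import boto3
from datetime import datetime, timedelta

ddb = boto3.client("dynamodb")

# Enable point-in-time recovery on an existing table ("Orders" is a placeholder).
ddb.update_continuous_backups(
    TableName="Orders",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# Later, restore the table to its state 24 hours ago into a new table.
ddb.restore_table_to_point_in_time(
    SourceTableName="Orders",
    TargetTableName="Orders-restored",
    RestoreDateTime=datetime.utcnow() - timedelta(hours=24),
)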

Service Level Agreement
If you are building your applications on DynamoDB and relying on it to store your mission-critical data, you need to know what kind of availability to expect. The DynamoDB Service Level Agreement (SLA) promises 99.99% availability for tables in a single region and 99.999% availability for global tables, within a monthly billing cycle. The SLA provides service credits if the availability promise is not met.

Adaptive Capacity
DynamoDB does a lot of work behind the scenes to adapt to varying workloads. For example, as your workload scales and evolves, DynamoDB automatically reshards and dynamically redistributes data between multiple storage partitions in response to changes in read throughput, write throughput, and storage.

Also, DynamoDB uses an adaptive capacity mechanism to address situations where the distribution of data across the storage partitions of a table has become somewhat uneven. This mechanism allows one partition to consume more than its fair share of the overall provisioned capacity for the table for as long as necessary, as long as overall use of provisioned capacity remains within bounds. With this change, the advice that we gave in the past regarding key distribution is not nearly as important.

To learn more about this feature and to see how it can help to compensate for surprising or unusual access patterns to your DynamoDB tables, read How Amazon DynamoDB adaptive capacity accommodates uneven data access patterns.

And There You Go
I hope that you have enjoyed this quick look at some of the most recent enterprise-style features for DynamoDB. We’ve got more on the way, so stay tuned for future updates.

Jeff;

PS – Last week we released a DynamoDB local Docker image that you can use in your containerized development environment and for CI testing.

Categories: Cloud

Amazon Lightsail Update – More Instance Sizes and Price Reductions

AWS Blog - Thu, 08/23/2018 - 15:03

Amazon Lightsail gives you access to the power of AWS, with the simplicity of a VPS (Virtual Private Server). You choose a configuration from a menu and launch a virtual machine (an instance) preconfigured with SSD-based storage, DNS management, and a static IP address. You can use Linux or Windows, and can even choose between eleven Linux-powered blueprints that contain ready-to-run copies of popular web, e-commerce, and development tools:

On the Linux/Unix side, you now have six options, including CentOS:

The monthly fee for each instance includes a generous data transfer allocation, giving you the ability to host web sites, blogs, online stores and whatever else you can dream up!

Since the launch of Lightsail in late 2016, we’ve done our best to listen and respond to customer feedback. For example:

October 2017 – Microsoft Windows – This update let you launch Lightsail instances running Windows Server 2012 R2, Windows Server 2016, and Windows Server 2016 with SQL Server 2016 Express. This allowed you to build, test, and deploy .NET and Windows applications without having to set up or run any infrastructure.

November 2017 – Load Balancers & Certificate Management – This update gave you the ability to build highly scalable applications that use load balancers to distribute traffic to multiple Lightsail instances. It also gave you access to free SSL/TLS certificates and a simple, integrated tool to request and validate them, along with an automated renewal mechanism.

November 2017 – Additional Block Storage – This update let you extend your Lightsail instances with additional SSD-backed storage, with the ability to attach up to 15 disks (each holding up to 16 TB) to each instance. The additional storage is automatically replicated and encrypted.

May 2018 – Additional Regions – This update let you launch Lightsail instances in the Canada (Central), Europe (Paris), and Asia Pacific (Seoul) Regions, bringing the total region count to 13, and giving you lots of geographic flexibility.

So that’s where we started and how we got here! What’s next?

And Now for the Updates
Today we are adding two more instance sizes at the top end of the range and reducing the prices for the existing instances by up to 50%.

Here are the new instance sizes:

16 GB – 16 GB of memory, 4 vCPUs, 320 GB of storage, and 6 TB of data transfer.

32 GB – 32 GB of memory, 8 vCPUs, 640 GB of storage, and 7 TB of data transfer.

Here are the monthly prices (billed hourly) for Lightsail instances running Linux:

Plan  512 MB  1 GB  2 GB  4 GB  8 GB  16 GB  32 GB
Old   $5.00   $10   $20   $40   $80   –      –
New   $3.50   $5    $10   $20   $40   $80    $160

And for Lightsail instances running Windows:

Plan  512 MB  1 GB  2 GB  4 GB  8 GB   16 GB  32 GB
Old   $10     $17   $30   $55   $100   –      –
New   $8      $12   $20   $40   $70    $120   $240

These reductions are effective as of August 1, 2018 and take place automatically, with no action on your part.
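
If you want to explore the new sizes from code, here is a hedged sketch using the boto3 Lightsail client; the instance name, Availability Zone, blueprint ID, and bundle ID are placeholders to be replaced with values returned by the get_* calls:

import boto3

lightsail = boto3.client("lightsail")

# List the available plans (bundles), then launch an instance.
# All names and IDs below are placeholders.
for bundle in lightsail.get_bundles()["bundles"]:
    print(bundle["bundleId"], bundle["ramSizeInGb"], bundle["price"])

lightsail.create_instances(
    instanceNames=["my-big-instance"],
    availabilityZone="us-east-1a",
    blueprintId="centos_7_1805_01",   # placeholder blueprint ID
    bundleId="2xlarge_2_0",           # placeholder bundle ID for the 32 GB plan
)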

From Our Customers
WordPress power users, developers, entrepreneurs, and people who need a place to host their personal web site are all making great use of Lightsail. The Lightsail team is always thrilled to see customer feedback on social media and shared a couple of recent tweets with me as evidence!

Emil Uzelac (@emiluzelac) is a well-respected member of the WordPress community, especially in the area of WordPress theme development and reviews. When he tried Lightsail, he was super impressed with the speed of our instances, calling them “by far the fastest I’ve tried”:

As an independent developer and SaaS cofounder, Mike Rogers (@mikerogers0) hasn’t spent a lot of time working with infrastructure. However, when he moved some of his Ruby on Rails projects over to Lightsail, he realized that it was easy (and actually fun) to make the move:

Stephanie Davis (@StephanieMDavis) is a business intelligence developer and honey bee researcher who wanted to find a new home for her writings. She settled on Lightsail, and after it was all up and running she had “. . . a much, much better grasp of the AWS cloud infrastructure and an economical, slick web host”:

If you have your own Lightsail success story to share, could I ask you to tweet it and hashtag it with #PoweredByLightsail? I can’t wait to read it!

Some New Lightsail Resources
While I have got your attention, I’d like to share some helpful videos with you!

Deploying a MEAN stack Application on Amazon Lightsail – AWS Developer Advocate Mike Coleman shows you how to deploy a MEAN stack (MongoDB, Express.js, Angular, Node.js) on Lightsail:

Deploying a WordPress Instance on Amazon Lightsail – Mike shows you how to deploy WordPress:

Deploying Docker Containers on Amazon Lightsail – Mike shows you how to use Docker containers:

Jeff;

Categories: Cloud
