
Feed aggregator

Presenting Amazon Sumerian: An easy way to create VR, AR, and 3D experiences

AWS Blog - Mon, 11/27/2017 - 00:03

If you have had an opportunity to read any of my blog posts or attended any session I’ve conducted at various conferences, you are probably aware that I am definitively a geek girl. I am absolutely enamored with all of the latest advancements in technology areas like cloud, artificial intelligence, internet of things, and the maker space, as well as with virtual reality and augmented reality. In my opinion, it is a wonderful time to be a geek. All the things we dreamed about building while we sweated through our algorithms and discrete mathematics classes, or the technology we marveled at when watching Star Wars and Star Trek, are now coming to fruition. So hopefully it is only a matter of time before I can hyperdrive to other galaxies, but until then I can at least build the 3D virtual reality and augmented reality characters and scenes like those featured in some of my favorite shows.

Amazon Sumerian provides tools and resources that allow anyone to create and run augmented reality (AR), virtual reality (VR), and 3D applications with ease. With Sumerian, you can build multi-platform experiences that run on hardware like the Oculus Rift, HTC Vive, and iOS devices using WebVR-compatible browsers, with support for ARCore on Android devices coming soon.

This exciting new service, currently in preview, delivers features to allow you to design highly immersive and interactive 3D experiences from your browser. Some of these features are:

  • Editor: A web-based editor for constructing 3D scenes, importing assets, scripting interactions and special effects, with cross-platform publishing.
  • Object Library: A library of pre-built objects and templates.
  • Asset Import: Upload 3D assets to use in your scene. Sumerian supports importing FBX and OBJ files, with support for Unity projects coming soon.
  • Scripting Library: A JavaScript scripting library, provided via the Sumerian 3D engine, for advanced scripting capabilities.
  • Hosts: Animated, lifelike 3D characters that can be customized for gender, voice, and language.
  • AWS Services Integration: Built-in integration with Amazon Polly and Amazon Lex to add speech and natural language interaction to Sumerian hosts. Additionally, the scripting library can be used with AWS Lambda, allowing use of the full range of AWS services.

Since Amazon Sumerian doesn’t require you to have 3D graphics or programming experience to build rich, interactive VR and AR scenes, let’s take a quick trip to the Sumerian Dashboard and check it out.

From the Sumerian Dashboard, I can easily create a new scene with a push of a button.

A default view of the new scene opens and is displayed in the Sumerian Editor. With the Tara Blog Scene opened in the editor, I can easily import assets into my scene.

I’ll click the Import Asset button and pick an asset, View Room, to import into the scene. With the desired asset selected, I’ll click the Add button to import it.

Excellent, my asset was successfully imported into the Sumerian Editor and is shown in the Asset panel.  Now, I have the option to add the View Room object into my scene by selecting it in the Asset panel and then dragging it onto the editor’s canvas.

I’ll repeat the import asset process and this time I will add the Mannequin asset to the scene.

Additionally, with Sumerian, I can add scripting to Entity assets to make my scene even more exciting by adding a ScriptComponent to an entity and creating a script.  I can use the provided built-in scripts or create my own custom scripts. If I create a new custom script, I will get a blank script with some base JavaScript code that looks similar to the code below.

'use strict';
/* global sumerian */

// This is Me -- trying out the custom scripts - Tara

var setup = function (args, ctx) {
  // Called when play mode starts.
};

var fixedUpdate = function (args, ctx) {
  // Called on every physics update, after setup().
};

var update = function (args, ctx) {
  // Called on every render frame, after setup().
};

var lateUpdate = function (args, ctx) {
  // Called after all script "update" methods in the scene have been called.
};

var cleanup = function (args, ctx) {
  // Called when play mode stops.
};

var parameters = [];
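To make the template actually do something, I could fill in the update hook and expose a tunable value through the parameters array. The sketch below is hypothetical: the ctx.entity.addRotation helper, ctx.world.tpf (time per frame), and the parameter descriptor format are assumptions based on the underlying engine’s scripting conventions, so check them against the Sumerian documentation before relying on them.

'use strict';
/* global sumerian */

// Hypothetical: spin the entity around its Y axis every frame, scaled by
// frame time and a user-editable Speed parameter (surfaced as args.speed).
var update = function (args, ctx) {
  ctx.entity.addRotation(0, args.speed * ctx.world.tpf, 0);
};

var parameters = [{
  key: 'speed',      // exposed to the script as args.speed
  name: 'Speed',     // label shown in the editor
  type: 'float',
  'default': 1.0
}];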

Very cool, I just created a 3D scene using Amazon Sumerian in a matter of minutes and I have only scratched the surface.

Summary

The Amazon Sumerian service enables you to create, build, and run virtual reality (VR), augmented reality (AR), and 3D applications with ease. You don’t need any 3D graphics or specialized programming knowledge to get started building scenes and immersive experiences. You can import FBX and OBJ files into Sumerian (with Unity project import coming soon), as well as upload your own 3D assets for use in your scene. In addition, you can create digital characters to narrate your scene, with choices for each character’s appearance, speech, and behavior.

You can learn more about Amazon Sumerian and sign up for the preview to get started with the new service on the product page.  I can’t wait to see what rich experiences you all will build.

Tara

 

Categories: Cloud

PHP 7.1.12 Released

PHP News - Thu, 11/23/2017 - 22:02
Categories: PHP

PHP 7.0.26 Released

PHP News - Thu, 11/23/2017 - 04:00
Categories: PHP

The AWS Cloud Goes Underground at re:Invent

AWS Blog - Wed, 11/22/2017 - 16:35

As you wander through the AWS re:Invent campus, take a minute to think about your expectations for all of the elements that need to come together…

Starting with the location, my colleagues have chosen the best venues, designed the sessions, picked the speakers, laid out the menu, selected the color schemes, programmed or printed all of the signs, and much more, all with the goal of creating an optimal learning environment for you and tens of thousands of other AWS customers.

However, as is often the case, the part that you can see is just a part of the picture. Behind the scenes, people, processes, plans, and systems come together to put all of this infrastructure into place and to make it run so smoothly that you don’t usually notice it.

Today I would like to tell you about a mission-critical aspect of the re:Invent infrastructure that is actually underground. In addition to providing great Wi-Fi for your phones, tablets, cameras, laptops, and other devices, we need to make sure that a myriad of events, from the live-streamed keynotes to the WorkSpaces-powered hands-on labs, are well connected to each other and to the Internet. With events running at hotels up and down the Las Vegas Strip, reliable, low-latency connectivity is essential!

Thank You CenturyLink / Level3
Over the years we have been working with the great folks at Level3 to make this happen. They recently became part of CenturyLink and are now the Official Network Sponsor of re:Invent, responsible for the network fiber, circuits, and services that tie the re:Invent campus together.

To make this happen, they set up two miles of dark fiber beneath the Strip, routed to multiple Availability Zones in two separate AWS Regions. The Sands Expo Center is equipped with redundant 10 gigabit connections and the other venues (Aria, MGM, Mirage, and Wynn) are each provisioned for 2 to 10 gigabits, meaning that over half of the Strip is enabled for Direct Connect. According to the IT manager at one of the facilities, this may be the largest temporary hybrid network ever configured in Las Vegas.

On the Wi-Fi side, showNets is plugged into the same network; your devices are talking directly to Direct Connect access points (how cool is that?).

Here’s a simplified illustration of how it all fits together:

The CenturyLink team will be onsite at re:Invent and will be tweeting live network stats throughout the week.

I hope you have enjoyed this quick look behind the scenes and beneath the street!

Jeff;

Categories: Cloud

An update on the Workflow Initiative for Drupal 8.4/8.5

Drupal News - Wed, 11/22/2017 - 09:57

This blog has been re-posted with permission from Dries Buytaert's blog. Please leave your comments on the original post.

Over the past weeks I have shared an update on the Media Initiative and an update on the Layout Initiative. Today I wanted to give an update on the Workflow Initiative.

Creating great software doesn't happen overnight; it requires a desire for excellence and a disciplined approach. Like the Media and Layout Initiatives, the Workflow Initiative has taken such an approach. The disciplined and steady progress these initiatives are making is something to be excited about.

8.4: The march towards stability

As you might recall from my last Workflow Initiative update, we added the Content Moderation module to Drupal 8.2 as an experimental module, and we added the Workflows module in Drupal 8.3 as well. The Workflows module allows for the creation of different publishing workflows with various states (e.g. draft, needs legal review, needs copy-editing, etc.) and the Content Moderation module exposes these workflows to content authors.

As of Drupal 8.4, the Workflows module has been marked stable. Additionally, the Content Moderation module is marked beta in Drupal 8.4, and is down to two final blockers before it can be marked stable. If you want to help with that, check out the Content Moderation module roadmap.

8.4: Making more entity types revisionable

To advance Drupal's workflow capabilities, more of Drupal's entity types needed to be made "revisionable". When content is revisionable, it becomes easier to move it through different workflow states or to stage content. Making more entity types revisionable is a necessary foundation for better content moderation, workflow and staging capabilities. But it was also hard work and took various people over a year of iterations — we worked on this throughout the Drupal 8.3 and Drupal 8.4 development cycle.

When working through this, we discovered various adjacent bugs (e.g. bugs related to content revisions and translations) that had to be worked through as well. As a plus, this has led to a more stable and reliable Drupal, even for those who don't use any of the workflow modules. This is a testament to our desire for excellence and disciplined approach.

8.5+: Looking forward to workspaces

While these foundational improvements in Drupal 8.3 and Drupal 8.4 are absolutely necessary to enable better content moderation and content staging functionality, they don't have much to show in terms of user experience changes. Now that a lot of this work is behind us, the Workflow Initiative has changed its focus to stabilizing the Content Moderation module, but is also aiming to bring the Workspace module into Drupal core as an experimental module.

The Workspace module allows the creation of multiple environments, such as "Staging" or "Production", and allows moving collections of content between them. For example, the "Production" workspace is what visitors see when they visit your site. Then you might have a protected "Staging" workspace where content editors prepare new content before it's pushed to the Production workspace.

While workflows for individual content items are powerful, many sites want to publish multiple content items at once as a group. This includes new pages, updated pages, but also changes to blocks and menu items — hence our focus on making things like block content and menu items revisionable. 'Workspaces' group all these individual elements (pages, blocks and menus) into a logical package, so they can be prepared, previewed and published as a group. This is one of the most requested features and will be a valuable differentiator for Drupal. It looks pretty slick too:

An outside-in design that shows how content creators could work in different workspaces. When you're building out a new section on your site, you want to preview your entire site, and publish all the changes at once. Designed by Jozef Toth at Pfizer.

I'm impressed with the work the Workflow team has accomplished during the Drupal 8.4 cycle: the Workflows module became stable, the Content Moderation module improved by leaps and bounds, and the under-the-hood work has prepared us for content staging via Workspaces. In the process, we've also fixed some long-standing technical debt in the revisions and translations systems, laying the foundation for future improvements.

Special thanks to Angie Byron for contributions to this blog post and to Dick Olsson, Tim Millwood and Jozef Toth for their feedback during the writing process.

Categories: Drupal

International PHP Conference 2018 - spring edition

PHP News - Wed, 11/22/2017 - 00:54
Categories: PHP

AWS IoT Update – Better Value with New Pricing Model

AWS Blog - Tue, 11/21/2017 - 12:51

Our customers are using AWS IoT to make their connected devices more intelligent. These devices collect & measure data in the field (below the ground, in the air, in the water, on factory floors and in hospital rooms) and use AWS IoT as their gateway to the AWS Cloud. Once connected to the cloud, customers can write device data to Amazon Simple Storage Service (S3) and Amazon DynamoDB, process data using Amazon Kinesis and AWS Lambda functions, initiate Amazon Simple Notification Service (SNS) push notifications, and much more.

New Pricing Model (20-40% Reduction)
Today we are making a change to the AWS IoT pricing model that will make it an even better value for you. Most customers will see a price reduction of 20-40%, with some receiving a significantly larger discount depending on their workload.

The original model was based on a charge for the number of messages that were sent to or from the service. This all-inclusive model was a good starting point, but also meant that some customers were effectively paying for parts of AWS IoT that they did not actually use. For example, some customers have devices that ping AWS IoT very frequently, with sparse rule sets that fire infrequently. Our new model is more fine-grained, with independent charges for each component (all prices are for devices that connect to the US East (Northern Virginia) Region):

Connectivity – Metered in 1 minute increments and based on the total time your devices are connected to AWS IoT. Priced at $0.08 per million minutes of connection (equivalent to $0.042 per device per year for 24/7 connectivity). Your devices can send keep-alive pings at 30 second to 20 minute intervals at no additional cost.

Messaging – Metered by the number of messages transmitted between your devices and AWS IoT. Pricing starts at $1 per million messages, with volume pricing falling as low as $0.70 per million. You may send and receive messages up to 128 kilobytes in size. Messages are metered in 5 kilobyte increments (up from 512 bytes previously). For example, an 8 kilobyte message is metered as two messages.

Rules Engine – Metered for each time a rule is triggered, and for the number of actions executed within a rule, with a minimum of one action per rule. Priced at $0.15 per million rules-triggered and $0.15 per million actions-executed. Rules that process a message in excess of 5 kilobytes are metered at the next multiple of the 5 kilobyte size. For example, a rule that processes an 8 kilobyte message is metered as two rules.

Device Shadow & Registry Updates – Metered on the number of operations to access or modify Device Shadow or Registry data, priced at $1.25 per million operations. Device Shadow and Registry operations are metered in 1 kilobyte increments of the Device Shadow or Registry record size. For example, an update to a 1.5 kilobyte Shadow record is metered as two operations.

The AWS Free Tier now offers a generous allocation of connection minutes, messages, triggered rules, rules actions, Shadow, and Registry usage, enough to operate a fleet of up to 50 devices. The new prices will take effect on January 1, 2018 with no effort on your part. At that time, the updated prices will be published on the AWS IoT Pricing page.

AWS IoT at re:Invent
We have an entire IoT track at this year’s AWS re:Invent. Here is a sampling:

We also have customer-led sessions from Philips, Panasonic, Enel, and Salesforce.

Jeff;

Categories: Cloud

New – Interactive AWS Cost Explorer API

AWS Blog - Mon, 11/20/2017 - 16:16

We launched the AWS Cost Explorer a couple of years ago in order to allow you to track, allocate, and manage your AWS costs. The response to that launch, and to the additions that we have made since then, has been very positive. However, our customers are, as Jeff Bezos has said, “beautifully, wonderfully, dissatisfied.”

I see this first-hand every day. We launch something and that launch inspires our customers to ask for even more. For example, with many customers going all-in and moving large parts of their IT infrastructure to the AWS Cloud, we’ve had many requests for the raw data that feeds into the Cost Explorer. These customers want to programmatically explore their AWS costs, update ledgers and accounting systems with per-application and per-department costs, and build high-level dashboards that summarize spending. Some of these customers have been going to the trouble of extracting the data from the charts and reports provided by Cost Explorer!

New Cost Explorer API
Today we are making the underlying data that feeds into Cost Explorer available programmatically. The new Cost Explorer API gives you a set of functions that allow you to do everything that I described above. You can retrieve cost and usage data that is filtered and grouped across multiple dimensions (Service, Linked Account, tag, Availability Zone, and so forth), aggregated by day or by month. This gives you the power to start simple (total monthly costs) and to refine your requests to any desired level of detail (writes to DynamoDB tables that have been tagged as production) while getting responses in seconds.

Here are the operations:

GetCostAndUsage – Retrieve cost and usage metrics for a single account or all accounts (master accounts in an organization have access to all member accounts) with filtering and grouping.

GetDimensionValues – Retrieve available filter values for a specified filter over a specified period of time.

GetTags – Retrieve available tag keys and tag values over a specified period of time.

GetReservationUtilization – Retrieve EC2 Reserved Instance utilization over a specified period of time, with daily or monthly granularity plus filtering and grouping.
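As a sketch of what a call might look like using the AWS SDK for JavaScript (the request shape follows the description above; the dates and grouping are illustrative):

var AWS = require('aws-sdk');

// The Cost Explorer service endpoint lives in US East (Northern Virginia).
var ce = new AWS.CostExplorer({region: 'us-east-1'});

var params = {
  TimePeriod: {Start: '2017-10-01', End: '2017-11-01'},
  Granularity: 'MONTHLY',
  Metrics: ['UnblendedCost'],
  // The API allows up to two levels of grouping; here: cost per service.
  GroupBy: [{Type: 'DIMENSION', Key: 'SERVICE'}]
};

ce.getCostAndUsage(params, function (err, data) {
  if (err) { return console.error(err); }
  data.ResultsByTime.forEach(function (result) {
    result.Groups.forEach(function (group) {
      console.log(group.Keys[0], group.Metrics.UnblendedCost.Amount);
    });
  });
  // If data.NextPageToken is set, call getCostAndUsage again with the
  // token to retrieve the next page of results.
});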

I believe that these functions, and the data that they return, will give you the ability to do some really interesting things that will give you better insights into your business. For example, you could tag the resources used to support individual marketing campaigns or development projects and then deep-dive into the costs to measure business value. You now have the potential to know, down to the penny, how much you spend on infrastructure for important events like Cyber Monday or Black Friday.

Things to Know
Here are a couple of things to keep in mind as you start to think about ways to make use of the API:

Grouping – The Cost Explorer web application provides you with one level of grouping; the APIs give you two. For example, you could group costs or RI utilization by Service and then by Region.

Pagination – The functions can return very large amounts of data and follow the AWS-wide model for pagination by including a nextPageToken if additional data is available. You simply call the same function again, supplying the token, to move forward.

Regions – The service endpoint is in the US East (Northern Virginia) Region and returns usage data for all public AWS Regions.

Pricing – Each API call costs $0.01. To put this into perspective, let’s say you use this API to build a dashboard and it gets 1000 hits per month from your users. Your operating cost for the dashboard should be $10 or so; this is far less expensive than setting up your own systems to extract & ingest the data and respond to interactive queries.

The Cost Explorer API is available now and you can start using it today. To learn more, read about the Cost Explorer API.

Jeff;

Categories: Cloud

Amazon QuickSight Update – Geospatial Visualization, Private VPC Access, and More

AWS Blog - Mon, 11/20/2017 - 14:26

We don’t often recognize or celebrate anniversaries at AWS. With nearly 100 services on our list, we’d be eating cake and drinking champagne several times a week. While that might sound like fun, we’d rather spend our working hours listening to customers and innovating. With that said, Amazon QuickSight has now been generally available for a little over a year and I would like to give you a quick update!

QuickSight in Action
Today, tens of thousands of customers (from startups to enterprises, in industries as varied as transportation, legal, mining, and healthcare) are using QuickSight to analyze and report on their business data.

Here are a couple of examples:

Gemini provides legal evidence procurement for California attorneys who represent injured workers. They have gone from creating custom reports and running one-off queries to creating and sharing dynamic QuickSight dashboards with drill-downs and filtering. QuickSight is used to track sales pipeline, measure order throughput, and to locate bottlenecks in the order processing pipeline.

Jivochat provides a real-time messaging platform to connect visitors to website owners. QuickSight lets them create and share interactive dashboards while also providing access to the underlying datasets. This has allowed them to move beyond the sharing of static spreadsheets, ensuring that everyone is looking at the same data and is empowered to make timely decisions based on current data.

Transfix is a tech-powered freight marketplace that matches loads and increases visibility into logistics for Fortune 500 shippers in retail, food and beverage, manufacturing, and other industries. QuickSight has made analytics accessible to both BI engineers and non-technical business users. They scrutinize key business and operational metrics including shipping routes, carrier efficiency, and process automation.

Looking Back / Looking Ahead
The feedback on QuickSight has been incredibly helpful. Customers tell us that their employees are using QuickSight to connect to their data, perform analytics, and make high-velocity, data-driven decisions, all without setting up or running their own BI infrastructure. We love all of the feedback that we get, and use it to drive our roadmap, leading to the introduction of over 40 new features in just a year. Here’s a summary:

Looking forward, we are watching an interesting trend develop within our customer base. As these customers take a close look at how they analyze and report on data, they are realizing that a serverless approach offers some tangible benefits. They use Amazon Simple Storage Service (S3) as a data lake and query it using a combination of QuickSight and Amazon Athena, giving them agility and flexibility without static infrastructure. They also make great use of QuickSight’s dashboards feature, monitoring business results and operational metrics, then sharing their insights with hundreds of users. You can read Building a Serverless Analytics Solution for Cleaner Cities and review Serverless Big Data Analytics using Amazon Athena and Amazon QuickSight if you are interested in this approach.

New Features and Enhancements
We’re still doing our best to listen and to learn, and to make sure that QuickSight continues to meet your needs. I’m happy to announce that we are making seven big additions today:

Geospatial Visualization – You can now create geospatial visuals on geographical data sets.

Private VPC Access – You can now sign up to access a preview of a new feature that allows you to securely connect to data within VPCs or on-premises, without the need for public endpoints.

Flat Table Support – In addition to pivot tables, you can now use flat tables for tabular reporting. To learn more, read about Using Tabular Reports.

Calculated SPICE Fields – You can now perform run-time calculations on SPICE data as part of your analysis. Read Adding a Calculated Field to an Analysis for more information.

Wide Table Support – You can now use tables with up to 1000 columns.

Other Buckets – You can summarize the long tail of high-cardinality data into buckets, as described in Working with Visual Types in Amazon QuickSight.

HIPAA Compliance – You can now run HIPAA-compliant workloads on QuickSight.

Geospatial Visualization
Everyone seems to want this feature! You can now take data that contains a geographic identifier (country, city, state, or zip code) and create beautiful visualizations with just a few clicks. QuickSight will geocode the identifier that you supply, and can also accept lat/long map coordinates. You can use this feature to visualize sales by state, map stores to shipping destinations, and so forth. Here’s a sample visualization:

To learn more about this feature, read Using Geospatial Charts (Maps), and Adding Geospatial Data.

Private VPC Access Preview
If you have data in AWS (perhaps in Amazon Redshift, Amazon Relational Database Service (RDS), or on EC2) or on-premises in Teradata or SQL Server on servers without public connectivity, this feature is for you. Private VPC Access for QuickSight uses an Elastic Network Interface (ENI) for secure, private communication with data sources in a VPC. It also allows you to use AWS Direct Connect to create a secure, private link with your on-premises resources. Here’s what it looks like:

If you are ready to join the preview, you can sign up today.

Jeff;

 

Categories: Cloud

php[tek] 2018 : Call for Speakers

PHP News - Mon, 11/20/2017 - 13:55
Categories: PHP

Amazon EC2 Update – X1e Instances in Five More Sizes and a Stronger SLA

AWS Blog - Thu, 11/16/2017 - 17:55

Earlier this year we launched the x1e.32xlarge instances in four AWS Regions with 4 TB of memory. Today, two months after that launch, customers are using these instances to run high-performance relational and NoSQL databases, in-memory databases, and other enterprise applications that are able to take advantage of large amounts of memory.

Five More Sizes of X1e
I am happy to announce that we are extending the memory-optimized X1e family with five additional instance sizes. Here’s the lineup:

Model          vCPUs    Memory (GiB)    SSD Storage (GB)    Networking Performance
x1e.xlarge       4         122               120            Up to 10 Gbps
x1e.2xlarge      8         244               240            Up to 10 Gbps
x1e.4xlarge     16         488               480            Up to 10 Gbps
x1e.8xlarge     32         976               960            Up to 10 Gbps
x1e.16xlarge    64       1,952             1,920            10 Gbps
x1e.32xlarge   128       3,904             3,840            25 Gbps

The instances are powered by quad socket Intel® Xeon® E7 8880 processors running at 2.3 GHz, with large L3 caches and plenty of memory bandwidth. ENA networking and EBS optimization are standard, with up to 14 Gbps of dedicated throughput (depending on instance size) to EBS.

As part of today’s launch we are also making all sizes of X1e available in the Asia Pacific (Sydney) Region. This means that you can now launch them in On-Demand and Reserved Instance form in the US East (Northern Virginia), US West (Oregon), EU (Ireland), Asia Pacific (Tokyo), and Asia Pacific (Sydney) Regions.

Stronger EC2 SLA
I also have another piece of good news!

Effective immediately, we are increasing the Service Level Agreement (SLA) for both EC2 and EBS to 99.99%, for all regions and for all AWS customers. This change was made possible by our continuous investment in infrastructure and quality of service, along with our focus on operational excellence.

Jeff;

Categories: Cloud

New – AWS OpsWorks for Puppet Enterprise

AWS Blog - Thu, 11/16/2017 - 10:44

At last year’s AWS re:Invent we launched AWS OpsWorks for Chef Automate which enabled customers to get their own Chef Automate server, managed by AWS. Building on customer feedback we’re excited to bring Puppet Enterprise to OpsWorks today.

Puppet Enterprise allows you to automate provisioning, configuring, and managing instances through a puppet-agent deployed on each managed node. You can define a configuration once and apply it to thousands of nodes with automatic rollback and drift detection. AWS OpsWorks for Puppet Enterprise eliminates the need to maintain your own Puppet masters while working seamlessly with your existing Puppet manifests.

OpsWorks for Puppet Enterprise will manage the Puppet master server for you and take care of operational tasks like installation, upgrades, and backups. It also simplifies node registration and offers a useful starter kit for bootstrapping your nodes. More details below.

Creating a Managed Puppet Master

Creating a Puppet master in OpsWorks is simple. First navigate to the OpsWorks console Puppet section and click “Create Puppet Enterprise Server”.

On this first part of the setup you configure the region and EC2 instance type for your Puppet master. A c4.large can support up to 450 nodes while a c4.2xlarge can support 1600+ nodes. Your Puppet Enterprise server will be provisioned with the newest version of Amazon Linux (2017.09) and the most current version of Puppet Enterprise (2017.3.2).

On the next screen of the setup you can optionally configure an SSH key for connecting to your Puppet master. This is useful if you’ll be making any major customizations, but it’s generally better practice to interact with Puppet through the client tools rather than directly on the instance itself.

Also on this page, you can set up an r10k repo to pull dynamic configurations.

In the advanced settings page you can select the usual deployment options around VPCs, security groups, IAM roles, and instance profiles. If you choose to have OpsWorks create the instance security group for you, note that it will be open by default, so it’s important to restrict access to it later.

Two components to pay attention to on this page are the maintenance window and backup configurations. When new minor versions of Puppet software become available, system maintenance is designed to update the minor version of Puppet Enterprise on your Puppet master automatically, as soon as it passes AWS testing. AWS performs extensive testing to verify that Puppet upgrades are production-ready and will deploy without disrupting existing customer environments. Automated backups allow you to store durable backups of your Puppet master in S3 and to restore from those backups at anytime. You can adjust the backup frequency and retention based on your business needs.
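The console flow above can also be scripted. Here is a hedged sketch using the AWS SDK for JavaScript’s OpsWorksCM client; the server name, instance type, and ARNs are placeholders, and the exact parameter set should be checked against the service documentation:

var AWS = require('aws-sdk');
var opsworkscm = new AWS.OpsWorksCM({region: 'us-west-2'});

opsworkscm.createServer({
  ServerName: 'my-puppet-master',          // hypothetical server name
  Engine: 'Puppet',
  EngineModel: 'Monolithic',               // assumed engine model for Puppet
  InstanceType: 'c4.large',                // supports up to ~450 nodes
  InstanceProfileArn: 'arn:aws:iam::123456789012:instance-profile/aws-opsworks-cm-ec2-role',
  ServiceRoleArn: 'arn:aws:iam::123456789012:role/aws-opsworks-cm-service-role',
  BackupRetentionCount: 10,                // automated backups kept in S3
  PreferredMaintenanceWindow: 'Mon:08:00', // weekly minor-version maintenance
  PreferredBackupWindow: '08:00'           // daily backup time (UTC)
}, function (err, data) {
  if (err) { return console.error(err); }
  console.log(data.Server.Status);         // e.g. "CREATING" while provisioning
});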

Using AWS OpsWorks for Puppet Enterprise

While your Puppet master is provisioning, there are two helpful information boxes in the console.

You can download your sign-in credentials as well as sample userdata for installing the puppet-agent onto your Windows and Linux nodes. An important note here is that you’re able to manage your on-premises nodes as well, provided they have connectivity to your Puppet master.

Once your Puppet master is fully provisioned, you can access the Puppet Enterprise web console and use Puppet as you normally would.

Useful Details

AWS OpsWorks for Puppet Enterprise is priced in node hours for your managed nodes. Prices start at $0.017 per node hour and decrease with the volume of nodes – you can see the full pricing page here. You’ll also pay for the underlying resources required to run your Puppet master. At launch, AWS OpsWorks for Puppet Enterprise is available in the US East (N. Virginia), US West (Oregon), and EU (Ireland) Regions. Of course, everything you’ve seen in the console can also be accomplished through the AWS SDKs and CLI. You can get more information in the Getting Started Guide.

Randall

Categories: Cloud

Longhorn PHP 2018

PHP News - Wed, 11/15/2017 - 20:42
Categories: PHP

An update on the Layout Initiative for Drupal 8.4/8.5

Drupal News - Wed, 11/15/2017 - 08:39

This blog has been re-posted with permission from Dries Buytaert's blog. Please leave your comments on the original post.

Now that Drupal 8.4 is released and Drupal 8.5 development is underway, it is a good time to give an update on what is happening with Drupal's Layout Initiative.

8.4: Stable versions of layout functionality

Traditionally, site builders have used one of two layout solutions in Drupal: Panelizer and Panels. Both are contributed modules outside of Drupal core, and both achieved stable releases in the middle of 2017. Given the popularity of these modules, having stable releases closed a major functionality gap that prevented people from building sites with Drupal 8.

8.4: A Layout API in core

The Layout Discovery module added in Drupal 8.3 core has now been marked stable. This module adds a Layout API to core. Both the aforementioned Panelizer and Panels modules have already adopted the new Layout API with their 8.4 release. A unified Layout API in core eliminates fragmentation and encourages collaboration.

8.5+: A Layout Builder in core

Today, Drupal's layout management solutions exist as contributed modules. Because creating and building layouts is expected to be out-of-the-box functionality, we're working towards adding layout building capabilities to Drupal core.

Using the Layout Builder, you start by selecting predefined layouts for different sections of the page, and then populate those layouts with one or more blocks. I showed the Layout Builder in my DrupalCon Vienna keynote and it was really well received:

8.5+: Use the new Layout Builder UI for the Field Layout module

One of the nice improvements that went in Drupal 8.3 was the Field Layout module, which provides the ability to apply pre-defined layouts to what we call "entity displays". Instead of applying layouts to individual pages, you can apply layouts to types of content regardless of what page they are displayed on. For example, you can create a content type 'Recipe' and visually lay out the different fields that make up a recipe. Because the layout is associated with the recipe rather than with a specific page, recipes will be laid out consistently across your website regardless of what page they are shown on.

The basic functionality is already included in Drupal core as part of the experimental Field Layout module. The goal for Drupal 8.5 is to stabilize the Field Layout module, and to improve its user experience by using the new Layout Builder. Eventually, designing the layout for a recipe could look like this:

Layout remains a strategic priority for Drupal 8, as it was the second most important site builder priority identified in my 2016 State of Drupal survey, right behind Migrations. I'm excited to see the work already accomplished by the Layout team, and look forward to seeing their progress in Drupal 8.5! If you want to help, check out the Layout Initiative roadmap.

Special thanks to Angie Byron for contributions to this blog post, to Tim Plunkett and Kris Vanderwater for their feedback during the writing process, and to Emilie Nouveau for the screenshot and video contributions.

Categories: Drupal

An update on the Media Initiative for Drupal 8.4/8.5

Drupal News - Fri, 11/10/2017 - 07:49

This blog has been re-posted with permission from Dries Buytaert's blog. Please leave your comments on the original post.

In my blog post, "A plan for media management in Drupal 8", I talked about some of the challenges with media in Drupal, the hopes of end users of Drupal, and the plan that the team working on the Media Initiative was targeting for future versions of Drupal 8. That blog post is one year old today. Since that time we released both Drupal 8.3 and Drupal 8.4, and Drupal 8.5 development is in full swing. In other words, it's time for an update on this initiative's progress and next steps.

8.4: a Media API in core

Drupal 8.4 introduced a new Media API to core. For site builders, this means that Drupal 8.4 ships with the new Media module (albeit still hidden from the UI, pending necessary user experience improvements), which is an adaptation of the contributed Media Entity module. The new Media module provides a "base media entity". Having a "base media entity" means that all media assets — local images, PDF documents, YouTube videos, tweets, and so on — are revisable, extendable (fieldable), translatable and much more. It allows all media to be treated in a common way, regardless of where the media resource itself is stored. For end users, this translates into a more cohesive content authoring experience; you can use consistent tools for managing images, videos, and other media rather than different interfaces for each media type.

8.4+: porting contributed modules to the new Media API

The contributed Media Entity module was a "foundational module" used by a large number of other contributed modules. It enables Drupal to integrate with Pinterest, Vimeo, Instagram, Twitter and much more. The next step is for all of these modules to adopt the new Media module in core. The required changes are laid out in the API change record, and typically only require a couple of hours to complete. The sooner these modules are updated, the sooner Drupal's rich media ecosystem can start benefitting from the new API in Drupal core. This is a great opportunity for intermediate contributors to pitch in.

8.5+: add support for remote video in core

As proof of the power of the new Media API, the team is hoping to bring in support for remote video using the oEmbed format. This allows content authors to easily add e.g. YouTube videos to their posts. This has been a long-standing gap in Drupal's out-of-the-box media and asset handling, and would be a nice win.

8.6+: a Media Library in core

The top two requested features for the content creator persona are richer image and media integration and digital asset management.

The results of the State of Drupal 2016 survey show the importance of the Media Initiative for content authors.

With a Media Library content authors can select pre-existing media from a library and easily embed it in their posts. Having a Media Library in core would be very impactful for content authors as it helps with both these feature requests.

During the 8.4 development cycle, a lot of great work was done to prototype the Media Library discussed in my previous Media Initiative blog post. I was able to show that progress in my DrupalCon Vienna keynote:

The Media Library work uses the new Media API in core. Now that the new Media API landed in Drupal 8.4 we can start focusing more on the Media Library. Due to bandwidth constraints, we don't think the Media Library will be ready in time for the Drupal 8.5 release. If you want to help contribute time or funding to the development of the Media Library, have a look at the roadmap of the Media Initiative or let me know and I'll get you in touch with the team behind the Media Initiative.

Special thanks to Angie Byron for contributions to this blog post and to Janez Urevc, Sean Blommaert, Marcos Cano Miranda, Adam G-H and Gábor Hojtsy for their feedback during the writing process.

Categories: Drupal

Say Hello To Our Newest AWS Community Heroes (Fall 2017 Edition)

AWS Blog - Thu, 11/09/2017 - 12:12

The AWS Community Heroes program helps shine a spotlight on some of the innovative work being done by rockstar AWS developers around the globe. Marrying cloud expertise with a passion for community building and education, these heroes share their time and knowledge across social media and through in-person events. Heroes also actively help drive community-led tracks at conferences. At this year’s re:Invent, many Heroes will be speaking during the Monday Community Day track.

This November, we are thrilled to have four Heroes joining our network of cloud innovators. Without further ado, meet our newest AWS Community Heroes!

 

Anh Ho Viet

Anh Ho Viet is the founder of AWS Vietnam User Group, Co-founder & CEO of OSAM, an AWS Consulting Partner in Vietnam, an AWS Certified Solutions Architect, and a cloud lover.

At OSAM, Anh and his enthusiastic team have helped many companies, from SMBs to Enterprises, move to the cloud with AWS. They offer a wide range of services, including migration, consultation, architecture, and solution design on AWS. Anh’s vision for OSAM is beyond a cloud service provider; the company will take part in building a complete AWS ecosystem in Vietnam, where other companies are encouraged to become AWS partners through training and collaboration activities.

In 2016, Anh founded the AWS Vietnam User Group as a channel to share knowledge and hands-on experience among cloud practitioners. Since then, the community has reached more than 4,800 members and is still expanding. The group holds monthly meetups, connects many SMEs to AWS experts, and provides real-time, free-of-charge consultancy to startups. In August 2017, Anh joined as lead content creator of a program called “Cloud Computing Lectures for Universities” which includes translating AWS documentation & news into Vietnamese, providing students with fundamental, up-to-date knowledge of AWS cloud computing, and supporting students’ career paths.

 

Thorsten Höger

Thorsten Höger is CEO and Cloud consultant at Taimos, where he is advising customers on how to use AWS. Being a developer, he focuses on improving development processes and automating everything to build efficient deployment pipelines for customers of all sizes.

Before being self-employed, Thorsten worked as a developer and CTO of Germany’s first private bank running on AWS. With his colleagues, he migrated the core banking system to the AWS platform in 2013. Since then, he has organized the AWS user group in Stuttgart and is a frequent speaker at Meetups, BarCamps, and other community events.

As a supporter of open source software, Thorsten maintains or contributes to several projects on GitHub, like test frameworks for AWS Lambda, Amazon Alexa, or developer tools for CloudFormation. He is also the maintainer of the Jenkins AWS Pipeline plugin.

In his spare time, he enjoys indoor climbing and cooking.

 

Becky Zhang

Yu Zhang (Becky Zhang) is COO of BootDev, which focuses on Big Data solutions on AWS and high-concurrency web architecture. Before she helped run BootDev, she worked at Yubis IT Solutions as an operations manager.

Becky plays a key role in the AWS User Group Shanghai (AWSUGSH), regularly organizing AWS UG events including AWS Tech Meetups and happy hours, gathering AWS talent together to communicate the latest technology and AWS services. As a woman in the technology industry, Becky is keen on promoting Women in Tech and encourages more women to get involved in the community.

Becky also connects the China AWS User Group with user groups in other regions, including Korea, Japan, and Thailand. She was invited as a panelist at AWS re:Invent 2016 and spoke at the Seoul AWS Summit this April to introduce AWS User Group Shanghai and communicate with other AWS User Groups around the world.

Besides events, Becky also promotes the Shanghai AWS User Group by posting AWS-related tech articles, event forecasts, and event reports to Weibo, Twitter, Meetup.com, and WeChat (which now has over 2000 official account followers).

 

Nilesh Vaghela

Nilesh Vaghela is the founder of ElectroMech Corporation, an AWS Cloud and open source focused company (the company started with an open source motto). Nilesh has been very active in the Linux community since 1998. He started working with AWS Cloud technologies in 2013, and in 2014 he trained a dedicated cloud team and began offering full support for AWS cloud services as an AWS Standard Consulting Partner. He always works to establish and encourage cloud and open source communities.

He started the AWS Meetup community in Ahmedabad in 2014, and 12 Meetups focusing on various AWS technologies have been conducted so far. The Meetup has quickly grown to include over 2000 members. Nilesh also created a Facebook group for AWS enthusiasts in Ahmedabad, with over 1500 members.

Apart from the AWS Meetup, Nilesh has delivered a number of seminars, workshops, and talks around AWS introduction and awareness, at various organizations, as well as at colleges and universities. He has also been active in working with startups, presenting AWS services overviews and discussing how startups can benefit the most from using AWS services.

Nilesh is a trainer for Red Hat Linux and AWS Cloud technologies as well.

 

To learn more about the AWS Community Heroes Program and how to get involved with your local AWS community, click here.

Categories: Cloud

Amazon ElastiCache Update – Online Resizing for Redis Clusters

AWS Blog - Thu, 11/09/2017 - 11:44

Amazon ElastiCache makes it easy for you to set up a fast, in-memory data store and cache. With support for the two most popular open source offerings (Redis and Memcached), ElastiCache supports the demanding needs of game leaderboards, in-memory analytics, and large-scale messaging.

Today I would like to tell you about an important addition to Amazon ElastiCache for Redis. You can already create clusters with up to 15 shards, each responsible for storing keys and values for a specific set of slots (each cluster has exactly 16,384 slots). A single cluster can expand to store 3.55 terabytes of in-memory data while supporting up to 20 million reads and 4.5 million writes per second.
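For context on how keys land on shards: Redis Cluster computes CRC16 of the key, modulo 16,384, to pick a slot, and the key is stored on whichever shard owns that slot. A simplified JavaScript sketch (it ignores Redis’s {hash tag} rule and assumes ASCII keys):

function crc16(key) {
  // CRC16-CCITT (XMODEM), the variant Redis Cluster uses.
  var crc = 0;
  for (var i = 0; i < key.length; i++) {
    crc ^= (key.charCodeAt(i) & 0xFF) << 8;
    for (var b = 0; b < 8; b++) {
      crc = (crc & 0x8000) ? ((crc << 1) ^ 0x1021) & 0xFFFF : (crc << 1) & 0xFFFF;
    }
  }
  return crc;
}

// 'user:1000' always hashes to the same one of the 16,384 slots, and is
// served by whichever shard currently owns that slot.
console.log(crc16('user:1000') % 16384);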

Now with Online Resizing
You can now adjust the number of shards in a running ElastiCache for Redis cluster while the cluster remains online and responding to requests. This gives you the power to respond to changes in traffic and data volume without having to take the cluster offline or to start with an empty cache. You can also rebalance a running cluster to uniformly redistribute slot space without changing the number of shards.

When you initiate a resharding or rebalancing operation, ElastiCache for Redis starts by preparing a plan that will result in an even distribution of slots across the shards in the cluster. Then it transfers slots across shards, moving many in parallel for efficiency. This all happens while the cluster continues to respond to requests, with a modest impact on write throughput for writes to a slot that is in motion. The migration rate depends on the instance type, network speed, and read/write traffic to the slots, and is generally about 1 gigabyte per minute.

The resharding and rebalancing operations apply to Redis clusters that were created with Cluster Mode enabled:

Resharding a Cluster
In general, you will know that it is time to expand a cluster via resharding when it starts to face significant memory pressure or when individual nodes are becoming bottlenecks. You can watch the cluster’s CloudWatch metrics to identify each situation:

Memory Pressure – FreeableMemory, SwapUsage, BytesUsedForCache.

CPU Bottleneck – CPUUtilization, CurrConnections, NewConnections.

Network Bottleneck – NetworkBytesIn, NetworkBytesOut.

You can use CloudWatch Dashboards to monitor these metrics, and CloudWatch Alarms to automate the resharding process.

To reshard a Redis cluster from the ElastiCache Dashboard, click on the cluster to visit the detail page, and then click on the Add shards button:

Enter the number of shards to add and (optionally) the desired Availability Zones, then click on Add:

The status of the cluster will change to modifying and the resharding process will begin. It can take anywhere from a few minutes to several hours, as indicated above. You can track the progress on the detail page for the cluster:

You can see the slots moving from shard to shard:

You can also watch the Events for the cluster:

During the resharding you should avoid the use of the KEYS and SMEMBERS commands, as well as compute-intensive Lua scripts, in order to moderate the load on the cluster shards. You should avoid the FLUSHDB and FLUSHALL commands entirely; using them will interrupt and then abort the resharding process.

The status of each shard will return to available when the process is complete:

The same process takes place when you delete shards.
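If you would rather automate this than click through the console (for example, from a CloudWatch alarm handler), a sketch using the AWS SDK for JavaScript might look like the following; the cluster name is a placeholder:

var AWS = require('aws-sdk');
var elasticache = new AWS.ElastiCache({region: 'us-east-1'});

elasticache.modifyReplicationGroupShardConfiguration({
  ReplicationGroupId: 'my-redis-cluster',  // hypothetical cluster ID
  NodeGroupCount: 5,                       // desired shard count after resharding
  ApplyImmediately: true
}, function (err, data) {
  if (err) { return console.error(err); }
  // The status shows "modifying" while slots are redistributed online.
  console.log(data.ReplicationGroup.Status);
});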

Rebalancing Slots
You can perform this operation by heading to the cluster’s detail page and clicking on Rebalance Slot Distribution:

Things to Know
Here are a couple of things to keep in mind about this new feature:

Engine Version – Your cluster must be running version 3.2.10 of the Redis engine.

Migration Size – Slots that contain items that are larger than 256 megabytes after serialization are not migrated.

Cluster Endpoint – The cluster endpoint does not change as a result of a resharding or rebalancing.

Available Now
This feature is available now and you can start using it today.

Jeff;

 

Categories: Cloud

PHP 7.2.0RC6 Released

PHP News - Thu, 11/09/2017 - 05:57
Categories: PHP

New – AWS PrivateLink for AWS Services: Kinesis, Service Catalog, EC2 Systems Manager, Amazon EC2 APIs, and ELB APIs in your VPC

AWS Blog - Wed, 11/08/2017 - 12:07

This guest post is by Colm MacCárthaigh, Senior Engineer for Amazon Virtual Private Cloud.

Since VPC Endpoints launched in 2015, creating Endpoints has been a popular way to securely access S3 and DynamoDB from an Amazon Virtual Private Cloud (VPC) without the need for an Internet gateway, a NAT gateway, or firewall proxies. With VPC Endpoints, the routing between the VPC and the AWS service is handled by the AWS network, and IAM policies can be used to control access to service resources.

Today we are announcing AWS PrivateLink, the newest generation of VPC Endpoints, designed to let customers access AWS services in a highly available and scalable manner while keeping all the traffic within the AWS network. Kinesis, Service Catalog, Amazon EC2, EC2 Systems Manager (SSM), and Elastic Load Balancing (ELB) APIs are now available to use inside your VPC, with support for more services, such as Key Management Service (KMS) and Amazon CloudWatch, coming soon.

With traditional endpoints, it’s very much like connecting a virtual cable between your VPC and the AWS service. Connectivity to the AWS service does not require an Internet or NAT gateway, but the endpoint remains outside of your VPC. With PrivateLink, endpoints are instead created directly inside of your VPC, using Elastic Network Interfaces (ENIs) and IP addresses in your VPC’s subnets. The service is now in your VPC, enabling connectivity to AWS services via private IP addresses. That means that VPC Security Groups can be used to manage access to the endpoints and that PrivateLink endpoints can also be accessed from your premises via AWS Direct Connect.

Using the services powered by PrivateLink, customers can now manage fleets of instances, create and manage catalogs of IT services as well as store and process data, without requiring the traffic to traverse the Internet.

Creating a PrivateLink Endpoint
To create a PrivateLink endpoint, I navigate to the VPC Console, select Endpoints, and choose Create Endpoint.

I then choose which service I’d like to access. New PrivateLink endpoints have an “interface” type. In this case I’d like to use the Kinesis service directly from my VPC and I choose the kinesis-streams service.

At this point I can choose which of my VPCs I’d like to launch my new endpoint in, and select the subnets that the ENIs and IP addresses will be placed in. I can also associate the endpoint with a new or existing Security Group, allowing me to control which of my instances can access the Endpoint.

Because PrivateLink endpoints will use IP addresses from my VPC, I have the option to override DNS for the AWS service DNS name by using VPC Private DNS. By leaving Enable Private DNS Name checked, lookups from within my VPC for “kinesis.us-east-1.amazonaws.com” will resolve to the IP addresses for the endpoint that I’m creating. This makes the transition to the endpoint seamless, without requiring any changes to my applications. If I’d prefer to test or configure the endpoint before handling traffic by default, I can leave this disabled and then change it at any time by editing the endpoint.

Once I’m ready and happy with the VPC, subnets and DNS settings, I click Create Endpoint to complete the process.

Using a PrivateLink Endpoint

By default, with the Private DNS Name enabled, using a PrivateLink endpoint is as straightforward as using the SDK, AWS CLI, or other software that accesses the service API from within your VPC. There’s no need to change any code or configurations.

To support testing and advanced configurations, every endpoint also gets a set of DNS names that are unique and dedicated to your endpoint. There’s a primary name for the endpoint and zonal names.

The primary name is particularly useful for accessing your endpoint via Direct Connect, without having to use any DNS overrides on-premises. Naturally, the primary name can also be used inside of your VPC.

The primary name, and the main service name (since I chose to override it), include zonal fault tolerance and will balance traffic between the Availability Zones. If I had an architecture that uses zonal isolation techniques, whether for fault containment and compartmentalization, low latency, or minimizing regional data transfer, I could also use the zonal names to explicitly control whether my traffic flows between or stays within zones.

Pricing & Availability
AWS PrivateLink is available today in all AWS commercial regions except China (Beijing). For the region availability of individual services, please check our documentation.

Pricing starts at $0.01 / hour plus a data processing charge at $0.01 / GB. Data transferred between Availability Zones, or between your endpoint and your premises via Direct Connect, will also incur the usual EC2 Regional and Direct Connect data transfer charges. For more information, see VPC Pricing.

Colm MacCárthaigh

 

Categories: Cloud
