Feed aggregator

New AWS Auto Scaling – Unified Scaling For Your Cloud Applications

AWS Blog - Tue, 01/16/2018 - 17:50

I’ve been talking about scalability for servers and other cloud resources for a very long time! Back in 2006, I wrote “This is the new world of scalable, on-demand web services. Pay for what you need and use, and not a byte more.” Shortly after we launched Amazon Elastic Compute Cloud (EC2), we made it easy for you to do this with the simultaneous launch of Elastic Load Balancing, EC2 Auto Scaling, and Amazon CloudWatch. Since then we have added Auto Scaling to other AWS services including ECS, Spot Fleets, DynamoDB, Aurora, AppStream 2.0, and EMR. We have also added features such as target tracking to make it easier for you to scale based on the metric that is most appropriate for your application.

Introducing AWS Auto Scaling
Today we are making it easier for you to use the Auto Scaling features of multiple AWS services from a single user interface with the introduction of AWS Auto Scaling. This new service unifies and builds on our existing, service-specific, scaling features. It operates on any desired EC2 Auto Scaling groups, EC2 Spot Fleets, ECS tasks, DynamoDB tables, DynamoDB Global Secondary Indexes, and Aurora Replicas that are part of your application, as described by an AWS CloudFormation stack or in AWS Elastic Beanstalk (we’re also exploring some other ways to flag a set of resources as an application for use with AWS Auto Scaling).

You no longer need to set up alarms and scaling actions for each resource and each service. Instead, you simply point AWS Auto Scaling at your application and select the services and resources of interest. Then you select the desired scaling option for each one, and AWS Auto Scaling will do the rest, helping you to discover the scalable resources and then creating a scaling plan that addresses the resources of interest.

If you have tried to use any of our Auto Scaling options in the past, you undoubtedly understand the trade-offs involved in choosing scaling thresholds. AWS Auto Scaling gives you a variety of scaling options: You can optimize for availability, keeping plenty of resources in reserve in order to meet sudden spikes in demand. You can optimize for costs, running close to the line and accepting the possibility that you will tax your resources if that spike arrives. Alternatively, you can aim for the middle, with a generous but not excessive level of spare capacity. In addition to optimizing for availability, cost, or a blend of both, you can also set a custom scaling threshold. In each case, AWS Auto Scaling will create scaling policies on your behalf, including appropriate upper and lower bounds for each resource.
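
If you prefer scripting to the console, the same kind of scaling plan can be created programmatically. Here's a minimal boto3 sketch; the stack ARN, group name, and capacity bounds are placeholder assumptions:

```python
import boto3

# AWS Auto Scaling exposes scaling plans via the 'autoscaling-plans' API.
client = boto3.client('autoscaling-plans')

response = client.create_scaling_plan(
    ScalingPlanName='my-app-plan',
    ApplicationSource={
        # Identify the application by its CloudFormation stack (placeholder ARN)
        'CloudFormationStackARN': 'arn:aws:cloudformation:us-east-1:123456789012:stack/my-app/abc123'
    },
    ScalingInstructions=[
        {
            'ServiceNamespace': 'autoscaling',
            'ResourceId': 'autoScalingGroup/my-app-asg',
            'ScalableDimension': 'autoscaling:autoScalingGroup:DesiredCapacity',
            'MinCapacity': 2,
            'MaxCapacity': 10,
            # Target tracking keeps the average CPU near the target value
            'TargetTrackingConfigurations': [
                {
                    'PredefinedScalingMetricSpecification': {
                        'PredefinedScalingMetricType': 'ASGAverageCPUUtilization'
                    },
                    'TargetValue': 50.0,
                }
            ],
        }
    ],
)
print('Created scaling plan version', response['ScalingPlanVersion'])
```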

AWS Auto Scaling in Action
I will use AWS Auto Scaling on a simple CloudFormation stack consisting of an Auto Scaling group of EC2 instances and a pair of DynamoDB tables. I start by removing the existing Scaling Policies from my Auto Scaling group:

Then I open the new Auto Scaling Console and select the stack:

Behind the scenes, Elastic Beanstalk applications are always launched via a CloudFormation stack. In the screen shot above, awseb-e-sdwttqizbp-stack is an Elastic Beanstalk application that I launched.

I can click on any stack to learn more about it before proceeding:

I select the desired stack and click on Next to proceed. Then I enter a name for my scaling plan and choose the resources that I’d like it to include:

I choose the scaling strategy for each type of resource:

After I have selected the desired strategies, I click Next to proceed. Then I review the proposed scaling plan, and click Create scaling plan to move ahead:

The scaling plan is created and in effect within a few minutes:

I can click on the plan to learn more:

I can also inspect each scaling policy:

I tested my new policy by applying a load to the initial EC2 instance, and watched the scale out activity take place:

I also took a look at the CloudWatch metrics for the EC2 Auto Scaling group:

Available Now
We are launching AWS Auto Scaling in the US East (Northern Virginia), US East (Ohio), US West (Oregon), EU (Ireland), and Asia Pacific (Singapore) Regions today, with more to follow. There’s no charge for AWS Auto Scaling; you pay only for the CloudWatch Alarms that it creates and any AWS resources that you consume.

As is often the case with our new services, this is just the first step on what we hope to be a long and interesting journey! We have a long roadmap, and we’ll be adding new features and options throughout 2018 in response to your feedback.

Jeff;

Categories: Cloud

Now Open – Third AWS Availability Zone in London

AWS Blog - Mon, 01/15/2018 - 16:00

We expand AWS by picking a geographic area (which we call a Region) and then building multiple, isolated Availability Zones in that area. Each Availability Zone (AZ) has multiple Internet connections and power connections to multiple grids.

Today I am happy to announce that we are opening our 50th AWS Availability Zone, with the addition of a third AZ to the EU (London) Region. This will give you additional flexibility to architect highly scalable, fault-tolerant applications that run across multiple AZs in the UK.
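
A quick way to see the Region's zones, including the new one, is to list them from code. Here's a minimal boto3 sketch (eu-west-2 is the EU (London) Region):

```python
import boto3

# List the Availability Zones in the EU (London) Region
ec2 = boto3.client('ec2', region_name='eu-west-2')

for zone in ec2.describe_availability_zones()['AvailabilityZones']:
    print(zone['ZoneName'], '-', zone['State'])
```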

Since launching the EU (London) Region, we have seen an ever-growing set of customers, particularly in the public sector and in regulated industries, use AWS for new and innovative applications. Here are a couple of examples, courtesy of my AWS colleagues in the UK:

Enterprise – Some of the UK’s most respected enterprises are using AWS to transform their businesses, including BBC, BT, Deloitte, and Travis Perkins. Travis Perkins is one of the largest suppliers of building materials in the UK and is implementing the biggest systems and business change in its history, including an all-in migration of its data centers to AWS.

Startups – Cross-border payments company Currencycloud has migrated its entire payments production and demo platform to AWS, resulting in a 30% saving on their infrastructure costs. Clearscore, which plans to disrupt the credit score industry, has also chosen to host their entire platform on AWS. UnderwriteMe is using the EU (London) Region to offer an underwriting platform to their customers as a managed service.

Public Sector – The Met Office chose AWS to support the Met Office Weather App, available for iPhone and Android phones. Since the Met Office Weather App went live in January 2016, it has attracted more than half a million users. Using AWS, the Met Office has been able to increase agility, speed, and scalability while reducing costs. The Driver and Vehicle Licensing Agency (DVLA) is using the EU (London) Region for services such as the Strategic Card Payments platform, which helps the agency achieve PCI DSS compliance.

The AWS EU (London) Region has achieved Public Services Network (PSN) assurance, which provides UK Public Sector customers with an assured infrastructure on which to build UK Public Sector services. In conjunction with AWS’s Standardized Architecture for UK-OFFICIAL, PSN assurance enables UK Public Sector organizations to move their UK-OFFICIAL classified data to the EU (London) Region in a controlled and risk-managed manner.

For a complete list of AWS Regions and Services, visit the AWS Global Infrastructure page. As always, pricing for services in the Region can be found on the detail pages; visit our Cloud Products page to get started.

Jeff;

Categories: Cloud

Happy seventeenth birthday Drupal

Drupal News - Mon, 01/15/2018 - 14:30

This blog has been re-posted and edited with permission from Dries Buytaert's blog. Please leave your comments on the original post.

Seventeen years ago today, I open-sourced the software behind Drop.org and released Drupal 1.0.0. When Drupal was first founded, Google was in its infancy, the mobile web didn't exist, and JavaScript was a very unpopular word among developers.

Over the course of the past seventeen years, I've witnessed the nature of the web change and countless internet trends come and go. As we celebrate Drupal's birthday, I'm proud to say it's one of the few content management systems that has stayed relevant for this long.

While the course of my career has evolved, Drupal has always remained a constant. It's what inspires me every day, and the impact that Drupal continues to make energizes me. Millions of people around the globe depend on Drupal to deliver their business, mission and purpose. Looking at the Drupal users in the video below gives me goosebumps.

Drupal's success is not only marked by the organizations it supports, but also by our community that makes the project more than just the software. While there were hurdles in 2017, there were plenty of milestones, too:

  • At least 190,000 sites running Drupal 8, up from 105,000 sites in January 2017 (80% year over year growth)
  • 1,597 stable modules for Drupal 8, up from 810 in January 2017 (95% year over year growth)
  • 4,941 DrupalCon attendees in 2017
  • 41 DrupalCamps held in 16 different countries
  • 7,240 individual code contributors, a 28% increase compared to 2016
  • 889 organizations that contributed code, a 26% increase compared to 2016
  • 13+ million visitors to Drupal.org in 2017
  • 76,374 instance hours for running automated tests (the equivalent of almost 9 years of continuous testing in one year)

Since Drupal 1.0.0 was released, our community's ability to challenge the status quo, embrace evolution and remain resilient has never faltered. 2018 will be a big year for Drupal as we continue to tackle important initiatives that not only improve Drupal's ease of use and maintenance, but also propel Drupal into new markets. No matter the challenge, I'm confident that the spirit and passion of our community will continue to grow Drupal for many birthdays to come.

Tonight, we're going to celebrate Drupal's birthday with a warm skillet chocolate chip cookie topped with vanilla ice cream. Drupal loves chocolate! ;-)

Note: The video was created by Acquia, but it is freely available for anyone to use when selling or promoting Drupal.

Categories: Drupal

How to decouple Drupal in 2018

Drupal News - Fri, 01/12/2018 - 10:19

This blog has been re-posted and edited with permission from Dries Buytaert's blog. Please leave your comments on the original post.

In this post, I'm providing some guidance on how and when to decouple Drupal.

Almost two years ago, I wrote a blog post called "How should you decouple Drupal?". Many people have found the flowchart in that post useful in deciding how to approach their Drupal architectures. Since then, Drupal, its community, and the surrounding market have evolved, and the original flowchart needs a big update.

Drupal's API-first initiative has introduced new capabilities, and we've seen the advent of the Waterwheel ecosystem and API-first distributions like Reservoir, Headless Lightning, and Contenta. More developers both inside and outside the Drupal community are experimenting with Node.js and adopting fully decoupled architectures.

Let's start with the new flowchart in full:

All the ways to decouple Drupal

The traditional approach to Drupal architecture, also referred to as coupled Drupal, is a monolithic implementation where Drupal maintains control over all front-end and back-end concerns. This is Drupal as we've known it — ideal for traditional websites. If you're a content creator, keeping Drupal in its coupled form is the optimal approach, especially if you want to achieve a fast time to market without as much reliance on front-end developers. But traditional Drupal 8 also remains a great approach for developers who love Drupal 8 and want it to own the entire stack.

A second approach, progressively decoupled Drupal, strikes a balance between editorial needs like layout management and developer desires to use more JavaScript, by interpolating a JavaScript framework into the Drupal front end. Progressive decoupling is in fact a spectrum, ranging from Drupal rendering the page's shell and populating initial data, to JavaScript controlling explicitly delineated sections of the page. Progressively decoupled Drupal hasn't taken the world by storm, likely because it's a mixture of both JavaScript and PHP and doesn't take advantage of server-side rendering via Node.js. Nonetheless, it's an attractive approach because it accommodates compromises on both sides, offering features important to both editors and developers.

Last but not least, fully decoupled Drupal has gained more attention in recent years as the growth of JavaScript continues with no signs of slowing down. This involves a complete separation of concerns between the structure of your content and its presentation. In short, it's like treating your web experience as just another application that needs to be served content. Even though it results in a loss of some out-of-the-box CMS functionality such as in-place editing or content preview, it's been popular because of the freedom and control it offers front-end developers.

What do you intend to build?

The most important question to ask is what you are trying to build.

  1. If your plan is to create a single standalone website or web application, decoupling Drupal may or may not be the right choice based on the must-have features your developers and editors are asking for.
  2. If your plan is to create multiple experiences (including web, native mobile, IoT, etc.), you can use Drupal to provide web service APIs that serve content to other experiences, either as (a) a content repository with no public-facing component or (b) a traditional website that is also a content repository at the same time.

Ultimately, your needs will determine the usefulness of decoupled Drupal for your use case. There is no technical reason to decouple if you're building a standalone website that needs editorial capabilities, though some teams choose to decouple anyway simply because they prefer JavaScript over PHP. Nonetheless, you need to pay close attention to the needs of your editors and ensure you aren't removing crucial features by using a decoupled approach. By the same token, you can't avoid decoupling Drupal if you're using it as a content repository for IoT or native applications. The next part of the flowchart will help you weigh those trade-offs.

Today, Drupal makes it much easier to build applications consuming decoupled Drupal. Even if you're using Drupal as a content repository to serve content to other applications, well-understood specifications like JSON API, GraphQL, OpenAPI, and CouchDB significantly lower its learning curve and open the door to tooling ecosystems provided by the communities who wrote those standards. In addition, there are now API-first distributions optimized to serve as content repositories and SDKs like Waterwheel.js that help developers "speak" Drupal.
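
To make that concrete, here's a minimal Python sketch that reads article nodes from a Drupal 8 site running the JSON API module; the domain is a placeholder, and /jsonapi/node/article is the module's default path for the built-in article content type:

```python
import requests

BASE = 'https://example.com'  # placeholder Drupal 8 site

# JSON API responses use the application/vnd.api+json media type
response = requests.get(
    f'{BASE}/jsonapi/node/article',
    headers={'Accept': 'application/vnd.api+json'},
)
response.raise_for_status()

# Each resource carries its fields under 'attributes'
for item in response.json()['data']:
    print(item['id'], item['attributes']['title'])
```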

Are there things you can't live without?

Perhaps most critical to any decision to decouple Drupal is the must-have feature set desired for both editors and developers. In order to determine whether you should use a decoupled Drupal, it's important to isolate which features are most valuable for your editors and developers. Unfortunately, there are no black-and-white answers here; every project will have to weigh the different pros and cons.

For example, many marketing teams choose a CMS because they want to create landing pages, and a CMS gives them the ability to lay out content on a page, quickly reorganize a page and more. The ability to do all this without the aid of a developer can make or break a CMS in marketers' eyes. Similarly, many digital marketers value the option to edit content in the context of its preview and to do so across various workflow states. These kinds of features typically get lost in a fully decoupled setting where Drupal does not exert control over the front end.

On the other hand, the need for control over the visual presentation of content can hinder developers who want to craft nuanced interactions or build user experiences in a particular way. Moreover, developer teams often want to use the latest and greatest technologies, and JavaScript is no exception. Nowadays, more JavaScript developers are including modern techniques, like server-side rendering and ES6 transpilation, in their toolboxes, and this is something decision-makers should take into account as well.

How you reconcile this tension between developers' needs and editors' requirements will dictate which approach you choose. For teams that have an entirely editorial focus and lack developer resources — or whose needs are focused on the ability to edit, place, and preview content in context — decoupling Drupal will remove all of the critical linkages within Drupal that allow editors to make such visual changes. But for teams with developers itching to have more flexibility and who don't need to cater to editors or marketers, fully decoupled Drupal can be freeing and allow developers to explore new paradigms in the industry — with the caveat that many of those features that editors value are now unavailable.

What will the future hold?

In the future, and in light of the rapid evolution of decoupled Drupal, my hope is that Drupal keeps shrinking the gap between developers and editors. After all, this was the original goal of the CMS in the first place: to help content authors write and assemble their own websites. Drupal's history has always been a balancing act between editorial needs and developers' needs, even as the number of experiences driven by Drupal grows.

I believe the next big hurdle is how to begin enabling marketers to administer all of the other channels appearing now and in the future with as much ease as they manage websites in Drupal today. In an ideal future, a content creator can build a content model once, preview content on every channel, and use familiar tools to edit and place content, regardless of whether the channel in question is mobile, chatbots, digital signs, or even augmented reality.

Today, developers are beginning to use Drupal not just as a content repository for their various applications but also as a means to create custom editorial interfaces. It's my hope that we'll see more experimentation around conceiving new editorial interfaces that help give content creators the control they need over a growing number of channels. At that point, I'm sure we'll need another new flowchart.

Conclusion

Thankfully, Drupal is in the right place at the right time. We've anticipated the new world of decoupled CMS architectures with web services in Drupal 8 and older contributed modules. More recently, API-first distributions, SDKs, and even reference applications in Ember and React are giving developers who have never heard of Drupal the tools to interact with it in unprecedented ways.

Unlike many other content management systems, old and new, Drupal provides a spectrum of architectural possibilities tuned to the diverse needs of different organizations. This flexibility between fully decoupling Drupal, progressively decoupling it, and traditional Drupal — in addition to each solution's proven robustness in the wild — gives teams the ability to make an educated decision about the best approach for them. This optionality sets Drupal apart from new headless content management systems and most SaaS platforms, and it also shows Drupal's maturity as a decoupled CMS over WordPress. In other words, it doesn't matter what the team looks like or what the project's requirements are; Drupal has the answer.

Special thanks to Preston So for contributions to this blog post and to Alex Bronstein, Angie Byron, Gabe Sullice, Samuel Mortenson, Ted Bowman and Wim Leers for their feedback during the writing process.

Categories: Drupal

AWS IoT, Greengrass, and Machine Learning for Connected Vehicles at CES

AWS Blog - Wed, 01/10/2018 - 12:12

Last week I attended a talk given by Bryan Mistele, president of Seattle-based INRIX. Bryan’s talk provided a glimpse into the future of transportation, centering around four principal attributes, often abbreviated as ACES:

Autonomous – Cars and trucks are gaining the ability to scan and to make sense of their environments and to navigate without human input.

Connected – Vehicles of all types have the ability to take advantage of bidirectional connections (either full-time or intermittent) to other cars and to cloud-based resources. They can upload road and performance data, communicate with each other to run in packs, and take advantage of traffic and weather data.

Electric – Continued development of battery and motor technology will make electric vehicles more convenient, cost-effective, and environmentally friendly.

Shared – Ride-sharing services will change usage from an ownership model to an as-a-service model (sound familiar?).

Individually and in combination, these emerging attributes mean that the cars and trucks we will see and use in the decade to come will be markedly different than those of the past.

On the Road with AWS
AWS customers are already using our AWS IoT, edge computing, Amazon Machine Learning, and Alexa products to bring this future to life – vehicle manufacturers, their tier 1 suppliers, and AutoTech startups all use AWS for their ACES initiatives. AWS Greengrass is playing an important role here, attracting design wins and helping our customers to add processing power and machine learning inferencing at the edge.

AWS customer Aptiv (formerly Delphi) talked about their Automated Mobility on Demand (AMoD) smart vehicle architecture in an AWS re:Invent session. Aptiv’s AMoD platform uses Greengrass and microservices to drive the onboard user experience, along with edge processing, monitoring, and control. Here’s an overview:

Another customer, Denso of Japan (one of the world’s largest suppliers of auto components and software), is using Greengrass and AWS IoT to support their vision of Mobility as a Service (MaaS). Here’s a video:

AWS at CES
The AWS team will be out in force at CES in Las Vegas and would love to talk to you. They’ll be running demos that show how AWS can help to bring innovation and personalization to connected and autonomous vehicles.

Personalized In-Vehicle Experience – This demo shows how AWS AI and Machine Learning can be used to create a highly personalized and branded in-vehicle experience. It makes use of Amazon Lex, Polly, and Amazon Rekognition, but the design is flexible and can be used with other services as well. The demo encompasses driver registration, login and startup (including facial recognition), voice assistance for contextual guidance, personalized e-commerce, and vehicle control. Here’s the architecture for the voice assistance:

Connected Vehicle Solution – This demo shows how a connected vehicle can combine local and cloud intelligence, using edge computing and machine learning at the edge. It handles intermittent connections and uses AWS DeepLens to train a model that responds to distracted drivers. Here’s the overall architecture, as described in our Connected Vehicle Solution:

Digital Content Delivery – This demo will show how a customer uses a web-based 3D configurator to build and personalize their vehicle. It will also show a high-resolution (4K) 3D image and an optional immersive AR/VR experience, both designed for use within a dealership.

Autonomous Driving – This demo will showcase the AWS services that can be used to build autonomous vehicles. There’s a 1/16th scale model vehicle powered and driven by Greengrass and an overview of a new AWS Autonomous Toolkit. As part of the demo, attendees drive the car, training a model via Amazon SageMaker for subsequent on-board inferencing, powered by Greengrass ML Inferencing.

To speak to one of my colleagues or to set up a time to see the demos, check out the Visit AWS at CES 2018 page.

Some Resources
If you are interested in this topic and want to learn more, the AWS for Automotive page is a great starting point, with discussions on connected vehicles & mobility, autonomous vehicle development, and digital customer engagement.

When you are ready to start building a connected vehicle, the AWS Connected Vehicle Solution contains a reference architecture that combines local computing, sophisticated event rules, and cloud-based data processing and storage. You can use this solution to accelerate your own connected vehicle projects.
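
To give a flavor of the ingestion side, here's a minimal boto3 sketch that publishes one vehicle telemetry message to an AWS IoT topic; the topic name and payload fields are illustrative, and a real vehicle would typically publish over MQTT with the AWS IoT Device SDK rather than from cloud-side Python:

```python
import json
import boto3

iot = boto3.client('iot-data')

# Publish a sample telemetry reading; IoT rules can route it onward
iot.publish(
    topic='connectedcar/telemetry/vin123',   # illustrative topic
    qos=1,
    payload=json.dumps({
        'vin': 'vin123',
        'speed_kmh': 87,
        'engine_temp_c': 92,
    }).encode('utf-8'),
)
```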

Jeff;

Categories: Cloud

Drupal 8 Content Migration: A Guide For Marketers

Drupal News - Tue, 01/09/2018 - 10:28

The following blog was written by Drupal Association Premium Supporting Partner, Phase2.

If you’re a marketer considering a move from Drupal 7 to Drupal 8, it’s important to understand the implications of content migration. You’ve worked hard to create a stable of content that speaks to your audience and achieves business goals, and it’s crucial that the migration of all this content does not disrupt your site’s user experience or alienate your visitors.

Content migrations are, in all honesty, fickle, challenging, and labor-intensive. The code produced for a migration is used once and discarded; the documentation that supports it is generally never seen again once the migration is done. So what’s the value in doing it at all?

YOUR DATA IS IMPORTANT (ESPECIALLY FOR SEO!) 

No matter what platform you’re working to migrate, your data is important. You’ve invested lots of time, money, and effort into producing content that speaks to your organization’s business needs.

Migrating your content smoothly and efficiently is crucial for your site’s SEO ranking. If you fail to migrate highly trafficked content, or to ensure that existing links direct readers to your content’s new home, you will see visitor numbers plummet. Once you fall behind in SEO, it’s difficult to climb back up to a top spot, so taking content migration seriously from the get-go is vital for your business’ visibility.

Also, if you work in healthcare or government, some or all of your content may be legally mandated to be both publicly available and letter-for-letter accurate. You may also have to go through lengthy (read: expensive) legal reviews for every word of content on your sites to ensure compliance with an assortment of legal standards – HIPAA, Section 508 and WCAG accessibility, copyright and patent review, and more.

Some industries also mandate access to content and services for people with Limited English Proficiency, which usually involves an additional level of editorial content review (See https://www.lep.gov/ for resources).  

At media organizations, it’s pretty simple – their content is their business!

In short, your content is a business investment – one that should be leveraged.

SO WHERE DO I START WITH A DRUPAL 8 MIGRATION?

Like with anything, you start at the beginning. In this case that’s choosing the right digital technology partner to help you with your migration. Here’s a handy guide to help you choose the right vendor and start your relationship off on the right foot.

Once you choose your digital partner, content migration should start at the very beginning of the engagement. Content migration is one of the building blocks of a good platform transition. It’s not something that can be left for later – trust us on this one. It’s complicated, takes a lot of developer hours, and typically affects both your content strategy and your design.

Done properly, the planning stages begin in the discovery phase of the project with your technology vendor, and work on migration usually continues well into the development phase, with an additional last-sprint push to get all the latest content moved over.

While there are lots of factors to consider, they boil down to two questions: What content are we migrating, and how are we doing it?

WHICH CONTENT TO MIGRATE

You may want to transition all of your content, but this is an area that does bear some discussion. We usually recommend a thorough content audit before embarking on any migration adventure. You can learn more about website content audits here. Since most migration happens at a code & database level, it’s possible to filter by virtually any facet of the content you like. The most common in our experience are date of creation, type of content, and categorization.

While it might be tempting to limit the migration to your site’s most recent few articles, Chris Anderson’s 2004 Wired article, “The Long Tail” (https://www.wired.com/2004/10/tail/), observes that a number of business models make good use of old, infrequently used content. The value of the Long Tail to your business is most certainly something that’s worth considering.

Obviously, the type of content to be migrated is pretty important as well. Most content management systems differentiate between different ‘content types’, each with their own uses and value. A good thorough analysis of the content model, and the uses to which each of these types has been and will be put, is invaluable here. There are actually two reasons for that. First, the analysis can be used to determine what content will be migrated, and how. Later, this analysis serves as the basis for creating those ‘content types’ in the destination site.

A typical analysis takes place in a spreadsheet (yay, spreadsheets!). Our planning sheet has multiple tabs but the critical one in the early stages is Content Types.

Here you see some key fields: Count, Migration, and Field Mapping Status.

Count is the number of items of each content type. This is often used to determine if it’s more trouble than it’s worth to do an automated content migration, as opposed to a simple cut & paste job. As a very general guideline, if there are more than 50 items of content in a content type, then that content should probably be migrated with automation. Of course, the number of fields in a content type can sway that as well. Once this determination is made, that info is stored in the Migration field.
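
As an illustration of that rule of thumb, here's a small Python sketch; it assumes you've exported the Content Types tab to a CSV with content_type and count columns, and is not part of our standard tooling:

```python
import csv

THRESHOLD = 50  # rough cutoff between copy & paste and automation

with open('content_types.csv', newline='') as f:
    for row in csv.DictReader(f):
        count = int(row['count'])
        decision = 'automate' if count > THRESHOLD else 'copy & paste'
        print(f"{row['content_type']}: {count} items -> {decision}")
```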

The Field Mapping Status column is for the use of developers, reflecting the current effort to create the new content types, with all their fields. It’s a summary of the Content Type Specific tabs in the spreadsheet. More detail on this is below.

Ultimately, the question of what content to migrate is a business question that should be answered in close consultation with your stakeholders.  Like all such conversations, this will be most productive if your decisions are made based on hard data.

HOW DO WE DO IT?

This is, of course, an enormous question. Once you’ve decided what content you are going to migrate, you begin by taking stock of the content types you are dealing with. That’s where the next tabs in the spreadsheet come in.

The first one you should tackle is the Global Field Mappings. Most content management systems define a set of default fields that are attached to all content types. In Drupal, for example, this includes title, created, updated, status, and body. Rather than waste effort documenting these on every content type, document them once and, through the magic of spreadsheet functions, print them out on the Content Type tabs.

Generally, you want to note Name, Machine Name, Field Type, and any additional Requirements or Notes on implementation on these spreadsheets.

It’s worth noting here that there are decisions to be made about what fields to migrate, just as you made decisions about what content types. Some data will simply be irrelevant or redundant in the new system, and may safely be ignored.

In addition to content types, you also want to document any supporting data – most likely users and any categorization or taxonomy. For a smooth migration, you usually want to actually start the development with them.

The last step we’ll cover in this post is content type creation. Having analyzed the structure of the data in the old system, it’s time to begin to recreate that structure in the new platform. For Drupal, this means creating new content type bundles, and making choices about the field types. New platforms, or new versions of platforms, often bring changes to field types, and some content will have to be adapted into new containers along the way. We’ll cover all that in a later post.

Now, many systems have the ability to migrate content types, in addition to content. Personally, I recommend against using this capability. Unless your content model is extremely simple, the changes to a content type’s fields are usually pretty significant. You’re better off putting in some labor up front than trying to clean up a computer’s mess later.

In our next post, we’ll address the foundations of Drupal content migrations – Migration Groups, and Taxonomy and User Migrations. Stay tuned!

Written by Joshua Turton

https://www.phase2technology.com/blog/drupal-8-content-migration-guide-marketers

Categories: Drupal

Dutch PHP Conference 2018 – Call for Papers

PHP News - Tue, 01/09/2018 - 05:35
This year marks the 12th edition of the Dutch PHP Conference, once again hosted in the beautiful city of Amsterdam. Our tutorial day will be Thursday, June 7th, with the main two-day conference following on June 8th and 9th, 2018. Speakers,
Categories: PHP

Dutch PHP Conference 2018

PHP News - Tue, 01/09/2018 - 05:30
Categories: PHP

AWS Online Tech Talks – January 2018

AWS Blog - Mon, 01/08/2018 - 11:31

Happy New Year! Kick off 2018 right by expanding your AWS knowledge with a great batch of new Tech Talks. We’re covering some of the biggest launches from re:Invent including Amazon Neptune, Amazon Rekognition Video, AWS Fargate, AWS Cloud9, Amazon Kinesis Video Streams, AWS PrivateLink, AWS Single Sign-On and more!

January 2018 – Schedule

Below is the schedule of upcoming live online technical sessions for the month of January. Make sure to register ahead of time so you won’t miss out on these free talks conducted by AWS subject matter experts.

Webinars featured this month are:

Monday, January 22

Analytics & Big Data
11:00 AM – 11:45 AM PT Analyze your Data Lake, Fast @ Any Scale  Lvl 300

Database
01:00 PM – 01:45 PM PT Deep Dive on Amazon Neptune Lvl 200

Tuesday, January 23

Artificial Intelligence
09:00 AM – 09:45 AM PT How to get the most out of Amazon Rekognition Video, a deep learning based video analysis service Lvl 300

Containers
11:00 AM – 11:45 AM PT Introducing AWS Fargate Lvl 200

Serverless
01:00 PM – 02:00 PM PT Overview of Serverless Application Deployment Patterns Lvl 400

Wednesday, January 24

DevOps
09:00 AM – 09:45 AM PT Introducing AWS Cloud9  Lvl 200

Analytics & Big Data
11:00 AM – 11:45 AM PT Deep Dive: Amazon Kinesis Video Streams Lvl 300

Database
01:00 PM – 01:45 PM PT Introducing Amazon Aurora with PostgreSQL Compatibility Lvl 200

Thursday, January 25

Artificial Intelligence
09:00 AM – 09:45 AM PT Introducing Amazon SageMaker Lvl 200

Mobile
11:00 AM – 11:45 AM PT Ionic and React Hybrid Web/Native Mobile Applications with Mobile Hub Lvl 200

IoT
01:00 PM – 01:45 PM PT Connected Product Development: Secure Cloud & Local Connectivity for Microcontroller-based Devices Lvl 200

Monday, January 29

Enterprise
11:00 AM – 11:45 AM PT Enterprise Solutions Best Practices 100 Achieving Business Value with AWS Lvl 100

Compute
01:00 PM – 01:45 PM PT Introduction to Amazon Lightsail Lvl 200

Tuesday, January 30

Security, Identity & Compliance
09:00 AM – 09:45 AM PT Introducing Managed Rules for AWS WAF Lvl 200

Storage
11:00 AM – 11:45 AM PT  Improving Backup & DR – AWS Storage Gateway Lvl 300

Compute
01:00 PM – 01:45 PM PT  Introducing the New Simplified Access Model for EC2 Spot Instances Lvl 200

Wednesday, January 31

Networking
09:00 AM – 09:45 AM PT  Deep Dive on AWS PrivateLink Lvl 300

Enterprise
11:00 AM – 11:45 AM PT Preparing Your Team for a Cloud Transformation Lvl 200

Compute
01:00 PM – 01:45 PM PT  The Nitro Project: Next-Generation EC2 Infrastructure Lvl 300

Thursday, February 1

Security, Identity & Compliance
09:00 AM – 09:45 AM PT  Deep Dive on AWS Single Sign-On Lvl 300

Storage
11:00 AM – 11:45 AM PT How to Build a Data Lake in Amazon S3 & Amazon Glacier Lvl 300

Categories: Cloud

PHP 5.6.33 Released

PHP News - Thu, 01/04/2018 - 12:21
Categories: PHP

PHP 7.1.13 Released

PHP News - Thu, 01/04/2018 - 07:27
Categories: PHP

PHP 7.2.1 Released

PHP News - Thu, 01/04/2018 - 07:26
Categories: PHP

Bulgaria PHP Conference 2016

PHP News - Thu, 01/04/2018 - 06:34
Categories: PHP

PHP 7.0.27 Released

PHP News - Thu, 01/04/2018 - 06:00
Categories: PHP

AWS Direct Connect Update – Ten New Locations Added in Late 2017

AWS Blog - Tue, 01/02/2018 - 10:34

Happy 2018! I am looking forward to getting back to my usual routine, working with our teams to learn about their upcoming launches and then writing blog posts to bring the news to you. Right now I am still catching up on a few launches and announcements from late 2017.

First on the list for today is our most recent round of new cities for AWS Direct Connect. AWS customers all over the world use Direct Connect to create dedicated network connections from their premises to AWS in order to reduce network costs, increase throughput, and enjoy a more consistent network experience.

We added ten new locations to our Direct Connect roster in December, all of which offer both 1 Gbps and 10 Gbps connectivity, along with partner-supplied options for speeds below 1 Gbps. Here are the newest locations, along with the data centers and associated AWS Regions:

  • Bangalore, India – NetMagic DC2 – Asia Pacific (Mumbai).
  • Cape Town, South Africa – Teraco Ct1 – EU (Ireland).
  • Johannesburg, South Africa – Teraco JB1 – EU (Ireland).
  • London, UK – Telehouse North Two – EU (London).
  • Miami, Florida, US – Equinix MI1 – US East (Northern Virginia).
  • Minneapolis, Minnesota, US – Cologix MIN3 – US East (Ohio).
  • Ningxia, China – Shapotou IDC – China (Ningxia).
  • Ningxia, China – Industrial Park IDC – China (Ningxia).
  • Rio de Janeiro, Brazil – Equinix RJ2 – South America (São Paulo).
  • Tokyo, Japan – AT Tokyo Chuo – Asia Pacific (Tokyo).

You can use these new locations in conjunction with the AWS Direct Connect Gateway to set up connectivity that spans Virtual Private Clouds (VPCs) spread across multiple AWS Regions (this does not apply to the AWS Regions in China).
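
If you script your network setup, the gateway itself is one API call. Here's a minimal boto3 sketch; the name is arbitrary and the ASN must come from the private range:

```python
import boto3

dx = boto3.client('directconnect')

# Create a Direct Connect Gateway that VPCs in multiple Regions can attach to
gateway = dx.create_direct_connect_gateway(
    directConnectGatewayName='global-dx-gateway',  # illustrative name
    amazonSideAsn=64512,                           # private ASN
)
print(gateway['directConnectGateway']['directConnectGatewayId'])
```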

If you are interested in putting Direct Connect to use, be sure to check out our ever-growing list of Direct Connect Partners.

Jeff;

Categories: Cloud

PHP Serbia Conference 2018

PHP News - Sat, 12/23/2017 - 12:54
Categories: PHP

AWS Training & Certification Update – Free Digital Training + Certified Cloud Practitioner Exam

AWS Blog - Thu, 12/21/2017 - 10:44

We recently made some updates to AWS Training and Certification to make it easier for you to build your cloud skills and to learn about many of the new services that we launched at AWS re:Invent.

Free AWS Digital Training
You can now find over 100 new digital training classes at aws.training, all with unlimited access at no charge.

The courses were built by AWS experts and allow you to learn AWS at your own pace, helping you to build foundational knowledge for dozens of AWS services and solutions. You can also access some more advanced training on Machine Learning and Storage.

Here are some of the new digital training topics:

You can browse through the available topics, enroll in one that interests you, watch it, and track your progress by looking at your transcript:

AWS Certified Cloud Practitioner
Our newest certification exam, AWS Certified Cloud Practitioner, lets you validate your overall understanding of the AWS Cloud with an industry-recognized credential. It covers four domains: cloud concepts, security, technology, and billing and pricing. We recommend that you have at least six months of experience (or equivalent training) with the AWS Cloud in any role, including technical, managerial, sales, purchasing, or financial.

To help you prepare for this exam, take our new AWS Cloud Practitioner Essentials course, one of the new AWS digital training courses. This course will give you an overview of cloud concepts, AWS services, security, architecture, pricing, and support. In addition to helping you validate your overall understanding of the AWS Cloud, AWS Certified Cloud Practitioner also serves as a new prerequisite option for the Big Data Specialty and Advanced Networking Specialty certification exams.

Go For It!
I’d like to encourage you to check out aws.training and to enroll in our free digital training in order to learn more about AWS and our newest services. You can strengthen your skills, add to your knowledge base, and set a goal of earning your AWS Certified Cloud Practitioner certification in the new year.

Jeff;

Categories: Cloud

Amazon Linux 2 – Modern, Stable, and Enterprise-Friendly

AWS Blog - Tue, 12/19/2017 - 14:45

I’m getting ready to wrap up my work for the year, cleaning up my inbox and catching up on a few recent AWS launches that happened at and shortly after AWS re:Invent.

Last week we launched Amazon Linux 2. This is a modern version of Linux, designed to meet the security, stability, and productivity needs of enterprise environments while giving you timely access to new tools and features. It also includes all of the things that made the Amazon Linux AMI popular, including AWS integration, cloud-init, a secure default configuration, regular security updates, and AWS Support. From that base, we have added many new features including:

Long-Term Support – You can use Amazon Linux 2 in situations where you want to stick with a single major version of Linux for an extended period of time, perhaps to avoid re-qualifying your applications too frequently. This build (2017.12) is a candidate for LTS status; the final determination will be made based on feedback in the Amazon Linux Discussion Forum. Long-term support for the Amazon Linux 2 LTS build will include security updates, bug fixes, user-space Application Binary Interface (ABI), and user-space Application Programming Interface (API) compatibility for 5 years.

Extras Library – You can now get fast access to fresh, new functionality while keeping your base OS image stable and lightweight. The Amazon Linux Extras Library eliminates the age-old tradeoff between OS stability and access to fresh software. It contains open source databases, languages, and more, each packaged together with any needed dependencies.

Tuned Kernel – You have access to the latest 4.9 LTS kernel, with support for the latest EC2 features and tuned to run efficiently in AWS and other virtualized environments.

Systemd – Amazon Linux 2 includes the systemd init system, designed to provide better boot performance and increased control over individual services and groups of interdependent services. For example, you can indicate that Service B must be started only after Service A is fully started, or that Service C should start on a change in network connection status.

Wide Availability – Amazon Linux 2 is available in all AWS Regions in AMI and Docker image form. Virtual machine images for Hyper-V, KVM, VirtualBox, and VMware are also available. You can build and test your applications on your laptop or in your own data center and then deploy them to AWS.

Launching an Instance
You can launch an instance in all of the usual ways – AWS Management Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, RunInstances, and via an AWS CloudFormation template. I’ll use the Console:

I’m interested in the Extras Library; here’s how I see which topics (lists of packages) are available:

As you can see, the library includes languages, editors, and web tools that receive frequent updates. Each topic contains all of the dependencies that are needed to install the package on Amazon Linux 2. For example, the Rust topic includes the cmake build system for Rust, cargo for Rust package maintenance, and the LLVM-based compiler toolchain for Rust.

Here’s how I install a topic (Emacs 25.3):

SNS Updates
Many AWS customers use the Amazon Linux AMIs as a starting point for their own AMIs. If you do this and would like to kick off your build process whenever a new AMI is released, you can subscribe to an SNS topic:

You can be notified by email, invoke an AWS Lambda function, and so forth.
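
Here's a minimal boto3 sketch of such a subscription; the topic ARN below is a placeholder, so substitute the actual ARN published for Amazon Linux AMI update notifications:

```python
import boto3

sns = boto3.client('sns', region_name='us-east-1')

# Subscribe an email address; use Protocol='lambda' and a function ARN
# to invoke a Lambda function instead
sns.subscribe(
    TopicArn='arn:aws:sns:us-east-1:123456789012:amazon-linux-2-ami-updates',  # placeholder
    Protocol='email',
    Endpoint='ops-team@example.com',
)
```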

Available Now
Amazon Linux 2 is available now and you can start using it in the cloud and on-premises today! To learn more, read the Amazon Linux 2 LTS Candidate (2017.12) Release Notes.

Jeff;

 

Categories: Cloud

Now Open – AWS EU (Paris) Region

AWS Blog - Mon, 12/18/2017 - 20:45

Today we are launching our 18th AWS Region, our fourth in Europe. Located in the Paris area, the new Region lets AWS customers better serve end users in and around France.

The Details
The new EU (Paris) Region provides a broad suite of AWS services including Amazon API Gateway, Amazon Aurora, Amazon CloudFront, Amazon CloudWatch, CloudWatch Events, Amazon CloudWatch Logs, Amazon DynamoDB, Amazon Elastic Compute Cloud (EC2), EC2 Container Registry, Amazon ECS, Amazon Elastic Block Store (EBS), Amazon EMR, Amazon ElastiCache, Amazon Elasticsearch Service, Amazon Glacier, Amazon Kinesis Streams, Polly, Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Route 53, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), Amazon Virtual Private Cloud, Auto Scaling, AWS Certificate Manager (ACM), AWS CloudFormation, AWS CloudTrail, AWS CodeDeploy, AWS Config, AWS Database Migration Service, AWS Direct Connect, AWS Elastic Beanstalk, AWS Identity and Access Management (IAM), AWS Key Management Service (KMS), AWS Lambda, AWS Marketplace, AWS OpsWorks Stacks, AWS Personal Health Dashboard, AWS Server Migration Service, AWS Service Catalog, AWS Shield Standard, AWS Snowball, AWS Snowball Edge, AWS Snowmobile, AWS Storage Gateway, AWS Support (including AWS Trusted Advisor), Elastic Load Balancing, and VM Import.

The Paris Region supports all sizes of C5, M5, R4, T2, D2, I3, and X1 instances.
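
Targeting the new Region from code is just a matter of specifying eu-west-3. Here's a minimal boto3 sketch that launches one of the supported instance types; the AMI ID is a placeholder to replace with a current image for the Region:

```python
import boto3

ec2 = boto3.client('ec2', region_name='eu-west-3')  # EU (Paris)

# Launch a single M5 instance in the Paris Region
ec2.run_instances(
    ImageId='ami-00000000000000000',  # placeholder; use a real eu-west-3 AMI
    InstanceType='m5.large',
    MinCount=1,
    MaxCount=1,
)
```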

There are also four edge locations for Amazon Route 53 and Amazon CloudFront: three in Paris and one in Marseille, all with AWS WAF and AWS Shield. Check out the AWS Global Infrastructure page to learn more about current and future AWS Regions.

The Paris Region will benefit from three AWS Direct Connect locations. Telehouse Voltaire is available today. AWS Direct Connect will also become available at Equinix Paris in early 2018, followed by Interxion Paris.

All AWS infrastructure regions around the world are designed, built, and regularly audited to meet the most rigorous compliance standards and to provide high levels of security for all AWS customers. These include ISO 27001, ISO 27017, ISO 27018, SOC 1 (Formerly SAS 70), SOC 2 and SOC 3 Security & Availability, PCI DSS Level 1, and many more. This means customers benefit from all the best practices of AWS policies, architecture, and operational processes built to satisfy the needs of even the most security sensitive customers.

AWS is certified under the EU-US Privacy Shield, and the AWS Data Processing Addendum (DPA) is GDPR-ready and available now to all AWS customers to help them prepare for May 25, 2018 when the GDPR becomes enforceable. The current AWS DPA, as well as the AWS GDPR DPA, allows customers to transfer personal data to countries outside the European Economic Area (EEA) in compliance with European Union (EU) data protection laws. AWS also adheres to the Cloud Infrastructure Service Providers in Europe (CISPE) Code of Conduct. The CISPE Code of Conduct helps customers ensure that AWS is using appropriate data protection standards to protect their data, consistent with the GDPR. In addition, AWS offers a wide range of services and features to help customers meet the requirements of the GDPR, including services for access controls, monitoring, logging, and encryption.

From Our Customers
Many AWS customers are preparing to use this new Region. Here’s a small sample:

Societe Generale, one of the largest banks in France and the world, has accelerated their digital transformation while working with AWS. They developed SG Research, an application that makes reports from Societe Generale’s analysts available to corporate customers in order to improve the decision-making process for investments. The new AWS Region will reduce latency between applications running in the cloud and in their French data centers.

SNCF is the national railway company of France. Their mobile app, powered by AWS, delivers real-time traffic information to 14 million riders. Extreme weather, traffic events, holidays, and engineering works can cause usage to peak at hundreds of thousands of users per second. They are planning to use machine learning and big data to add predictive features to the app.

Radio France, the French public radio broadcaster, offers seven national networks, and uses AWS to accelerate its innovation and stay competitive.

Les Restos du Coeur is a French charity that provides assistance to the needy, delivering food packages and helping with their social and economic integration back into French society. Les Restos du Coeur is using AWS for its CRM system to track the assistance given to each of their beneficiaries and the impact this is having on their lives.

AlloResto by JustEat (a leader in the French FoodTech industry) is using AWS to scale during traffic peaks and to accelerate their innovation process.

AWS Consulting and Technology Partners
We are already working with a wide variety of consulting, technology, managed service, and Direct Connect partners in France. Here’s a partial list:

AWS Premier Consulting Partners – Accenture, Capgemini, Claranet, CloudReach, DXC, and Edifixio.

AWS Consulting Partners – ABC Systemes, Atos International SAS, CoreExpert, Cycloid, Devoteam, LINKBYNET, Oxalide, Ozones, Scaleo Information Systems, and Sopra Steria.

AWS Technology Partners – Axway, Commerce Guys, MicroStrategy, Sage, Software AG, Splunk, Tibco, and Zerolight.

AWS in France
We have been investing in Europe, with a focus on France, for the last 11 years. We have also been developing documentation and training programs to help our customers to improve their skills and to accelerate their journey to the AWS Cloud.

As part of our commitment to AWS customers in France, we plan to train more than 25,000 people in the coming years, helping them develop highly sought after cloud skills. They will have access to AWS training resources in France via AWS Academy, AWSome days, AWS Educate, and webinars, all delivered in French by AWS Technical Trainers and AWS Certified Trainers.

Use it Today
The EU (Paris) Region is open for business now and you can start using it today!

Jeff;

 

Categories: Cloud
