Enter Behemoth: what AWS re:Invent means to pure-plays
Top 10 AWS re:Invent announcements and how they might impact pure-plays
As a start-up founder, there is always that slightly sick moment when you uncover a potential competitor in your domain or niche, and the more specialised your business is, the more susceptible you are to this.
Even more so when that competitor is a global giant trumpeting their service to the world on a marketing platform you couldn’t dream of affording; it’s a common entrepreneur’s lament.
The brutal truth almost everyone in software needs to understand is that you are not a beautiful or unique snowflake, and it’s almost inevitable your idea has been thought of, and probably tried, many times before, often by someone bigger than you, better capitalised than you and, just maybe, perish the thought, even better than you.
A commonly held maxim of entrepreneurship is that ideas are worthless and execution is everything. Although the start-up scene can be an echo chamber of sorts, there is undoubtedly more than a grain of truth to this, considering the successes and failures of different companies with ostensibly the same strategy and business plan.
With that in mind then, in the SaaS world, Amazon’s annual AWS re:Invent announcements must be a cause for both butterflies (in the stomach of any entrepreneur excited to smell a business opportunity) and the sick feeling in the pit of the same organ (for those who run services in a space where the leviathan of Amazon has decided to plant its feet).
While Amazon certainly innovate, it has to be said that many of these new AWS services are not unique snowflakes either, so who will these announcements affect?
What does it change?
I thought it would be interesting, then, to take a brief look at the ten most interesting new services (or changes to existing services) announced at Amazon’s re:Invent conference, with a few comments on the competitive landscape that some of AWS’s releases may be terraforming in the near future. My focus here is on those companies that are either pure-plays, or at least those whose existence seems primarily driven by solutions in the spaces where Amazon is muscling in.
I find the situations (and potential reactions) of these companies interesting as, being a start-up founder myself, I feel these potentially existential threats are perpetually around the corner in the cloud SaaS market.
In most cases there are, of course, other offerings from the more obvious enterprise competitors like Microsoft and IBM, but these are less interesting case studies with their well-diversified software and service portfolios, whereas some of the companies mentioned below may either thrive or die from changes in the market Amazon bring.
1. Amazon QuickSight

Probably the biggest announcement of the event, QuickSight is a business intelligence service that seems to directly attack the cloud offerings of companies like BIME, Birst and GoodData, while vendors with on-premise-centric markets, like Qlik, may not face quite the same threat level.
The included AutoGraph tool supposedly learns the best types of graph to show the analytical patterns of data over time, which potentially also puts it up against more visual analytics-focussed companies like Tableau and Chartio.
2. Amazon Elasticsearch Service

The Elasticsearch service is an interesting move by Amazon, and this announcement may help centre more of the log analytics market around Elasticsearch as the standard, which may not be good for proprietary competitors like Splunk. At the same time, the recently renamed Elastic, the company behind Elasticsearch, freshly injected with large amounts of capital to better commercialise the offering, must also feel some threat to its own ‘Elasticsearch as a Service’ platform, Found.
Open-source competitors like Graylog still offer a more complete stack than Elasticsearch alone, but it is hard to see Amazon not packaging its services together (for example, Elasticsearch, Kinesis Firehose and QuickSight with AutoGraph) in a way that directly challenges these existing composite offerings in the market.
3. Amazon Inspector

Now in preview, this service finds security and compliance vulnerabilities in applications running on AWS. The marketing literature is comparatively light on detail about how these security inspections are achieved, but it seems to suggest full-stack testing: both black-box testing of the application and analysis of the server configuration. Presumably, then, this lines up to compete with the likes of Trustwave’s suite of products, Burp Suite and Netsparker.
4. AWS WAF

Continuing the theme of web application security, AWS WAF allows an integrated, customisable firewall to be built around your AWS applications. This will compete with the offerings of CrowdStrike, SiteLock, Sucuri, Imperva’s SecureSphere and Incapsula products, and services offered by Akamai (acquired with Prolexic) and CloudFlare.
Many of you may be asking what this can do that Apache, Nginx or IIS with the open-source ModSecurity couldn’t. It’s a question I have myself, and presumably one that AWS will answer over time, but the integration of services and dashboards seems to be a recurring theme here, with Amazon finding a pricing space between existing vendors and free, open-source solutions, as it did with the Aurora database. It will be interesting to see whether these particular types of lock-in services gain traction.
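For context, this is the kind of thing the free ModSecurity module already does on those servers. A minimal sketch in ModSecurity 2.x syntax (the rule id and pattern here are arbitrary examples, not a recommended ruleset):

```apache
# Minimal ModSecurity sketch: reject requests whose parameters contain
# a crude SQL-injection probe. The id value is an arbitrary example.
SecRuleEngine On
SecRule ARGS "@rx (?i)union\s+select" \
    "id:100001,phase:2,deny,status:403,msg:'SQL injection attempt'"
```

The question for AWS WAF, then, is less whether it can match rules like this and more whether the managed integration and dashboards justify the per-rule pricing.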
5. AWS Mobile Hub

A platform to build, run, test and monitor mobile apps in the cloud, this service incorporates the AWS Device Farm created in the past year from the ashes of AppThwack, and attempts to offer a one-stop shop for mobile application development.
It will be interesting to see how this competes against the likes of Parse, Kii, Xamarin, Kony, Appery and Appcelerator, and more specialist Mobile Backend as a Service (MBaaS) providers like Kinvey and AnyPresence. The mobile development community has been entrenched in PaaS offerings for longer, and Amazon may have trouble prising developers out of their existing ecosystems.
6. AWS IoT

Bringing the Internet of Things to the cloud was an expected move for Amazon. This presumably follows directly from their acquisition of 2lemetry earlier in 2015 and fits their general strategy of increasing their presence in the IoT space, alongside advancing Kinesis for high-throughput streaming and pushing into physical consumer devices with the likes of the Dash Button.
AWS IoT seems to be more of a pattern for utilising existing services in pre-fabricated ways for massive-scale IoT management; however, the likes of Carriots, SeeControl, ThingSpeak and Waylay may see a big threat in the natural ecosystem AWS is making available.
7. AWS Snowball

Now this is a departure from the norm for Amazon: a physical hardware appliance with up to 50TB of storage that can be shipped back to AWS for “petabyte-scale” uploads into the cloud.
I’m not aware of any pure-plays in this space, but Iron Mountain and Prime Focus Technologies at least have been offering this for Google Cloud Platform, while Aspera (an IBM company) has for a few years been touting its ability to move data onto AWS at ‘maximum speed’, which Snowball will presumably trounce on performance, if not convenience, at larger sizes.
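To see why sneakernet still wins at this scale, here is a back-of-the-envelope sketch (my own illustrative numbers, assuming decimal units and a fully saturated link):

```python
def transfer_days(terabytes: float, link_mbps: float) -> float:
    """Days needed to push `terabytes` of data over a `link_mbps` link,
    assuming the link is fully saturated the whole time (best case)."""
    bits = terabytes * 1e12 * 8          # decimal terabytes to bits
    seconds = bits / (link_mbps * 1e6)   # megabits/s to bits/s
    return seconds / 86400

# A full 50TB Snowball's worth over a dedicated 1 Gbps line:
print(round(transfer_days(50, 1000), 1))  # ≈ 4.6 days
# ...and over a more typical 100 Mbps uplink:
print(round(transfer_days(50, 100)))      # ≈ 46 days
```

Against a week or so of courier round-trip per appliance, the wire only competes when you have a very fat, very idle pipe.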
8. EC2 extensions and container services

Amazon continues its march towards greater support for flexible application topologies and the industry trend towards containerisation and microservices with changes to the EC2 services and an EC2 Container Registry to go with the EC2 Container Services that have been around for a little while now.
The classic EC2 service has been extended with both a bigger, badder instance (the x1) and a smaller one (the t2.nano), appealing to different ends of the market. While the biggest server instance is unlikely to make much difference to many competitors, except to provide more competition to the likes of IBM, HP and Oracle hardware offerings, the new nano class of burstable instances may lower the bar even further for how much it costs to run applications in the cloud. Dedicated Hosts and more flexible spot plans also give yet more options that erode some of the barriers to migration we have heard organisations cite regarding AWS.
Combined with this, the EC2 Container Services for running Docker containers and the EC2 Container Registry for storing them allow AWS to capitalise on a trend that threatened to usurp the established dominance of the virtual machine as the portable atomic unit of choice for the cloud. The increasing dominance of Docker means these services may end up affecting solutions like Docker’s own subscription offerings for registries and those of other third parties, like Springs.
There’s an interesting aside here, as with the Elastic story, regarding the challenges of making software open-source when a behemoth like Amazon takes a shine to it. Time will tell whether Docker and Elastic become victims of their own success by providing an open market ready for other enterprises to exploit at scale.
While not a platform in its own right, EC2 with the container extensions begins to threaten even more established application PaaS providers like Heroku, Engine Yard and AppFog. The new smaller instances and lightweight containers may also add competition for players at the bottom end of the processing spectrum, like the ARM/Raspberry Pi cloud services of Scaleway. For the moment, however, these announcements simply seem to take away one of the many headaches these companies solve; while unlikely to seduce away their customers, they perhaps give already AWS-centric users a little additional incentive to stay in-house.
9. MariaDB on Amazon RDS

Yawn, another DB instance type, I hear you say. Well, while not the most ground-breaking revelation, it’s a good development for the open-source community as validation of the fork from Oracle’s MySQL. While I’m not aware of any service devoted solely to providing MariaDB-as-a-Service, since the fork from MySQL following Oracle’s acquisition of Sun, MariaDB has been a flag-bearer of the open-source database movement alongside Postgres, and this will be another thorn in Oracle’s side. While not a huge surprise in itself, it is encouraging even in a post-Aurora world that this has been announced.
Broadening the market for ease of use of open-source technology will certainly be welcomed in some quarters, including here at TestZoo, since we went down the MySQL route for some of our backend primarily because MariaDB was not available. Now, if only there were some easy way of migrating to MariaDB…
10. AWS Database Migration Service

Well, that was well-timed! The Database Migration Service aids the transition of databases of almost any form into the cloud, and includes a Schema Conversion Tool that supposedly cuts migration setup to no more than 15 minutes. However, the devil may lie in the detail, as it always does with migration and integration projects, and Amazon executive Andy Jassy alluded to the real-world constraints here when he said “We think we’ll be able to address 80% of changes automatically”. The Pareto principle is likely at play here, suggesting the remaining 20% of changes are what make up 80% of the cost anyway.
While I’m a little sceptical of how successful this will be for anything beyond trivial conversions, anything that potentially allows migrations from expensive legacy RDBMS vendors to cheaper cloud alternatives is likely to be welcomed with open arms.
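To put rough numbers on that Pareto intuition (an assumed 80/20 split for illustration, not figures from AWS):

```python
# Hypothetical illustration: the tool converts 80% of schema changes
# automatically, but if those easy changes carry only 20% of the
# manual migration effort, the headline saving is misleading.
share_of_changes_automated = 0.80
share_of_effort_automated = 0.20   # Pareto assumption: easy changes are the cheap ones

effort_remaining = 1 - share_of_effort_automated
print(f"{share_of_changes_automated:.0%} of changes automated, "
      f"yet {effort_remaining:.0%} of the effort may remain")
```

In other words, under this assumption an "80% automated" migration could still leave the bulk of the project cost on the table.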
That said, due to that pesky ‘law of the vital few’, I don’t think this will have a major impact on businesses focussing on cloud-migration expertise, whether through consulting services like Appirio or SaaS offerings like Racemi’s Cloud Path and RiverMeadow, especially as one would assume their platform-agnostic services will continue to appeal to those in need of packaged migration expertise.
Other changes and their impact on the industry
The biggest impacts of some of these announcements are likely to be felt not by SaaS providers, but by consultancies and specialists in the niche developments and processes that Amazon has just made a bit easier. Extensions also came for Kinesis, with Firehose for streaming data into the cloud and Analytics for querying such real-time streams, along with AWS Config Rules for monitoring configuration compliance, Lambda support for Python and VPCs, and CloudWatch Dashboards for better monitoring visualisations.
All of these may take further chunks out of existing AWS application stacks or reduce the consulting effort required for AWS development and migrations; however, they don’t seem to fundamentally change the nature of the game beyond standardising approaches to certain problems. They are, after all, non-portable, AWS lock-in offerings, so they probably don’t substantially change the nature of any business out there.
Amazon have again broadened their offerings to fill in the gaps, provided more cohesive packages between services and extended their reach into other segments, notably, business intelligence and mobile.
While there’s nothing there that gives us cold chills, it’s probably fair to say there will be a few people out there none too happy with the prospect of having to explain to clients why they shouldn’t just put their services on AWS.
As I said at the beginning though, in the end it all comes down to execution. I’m sure most of the companies mentioned are up to the challenge of distinguishing their service offerings and keeping ahead of the generalist role that Amazon typically plays. Amazon excels at sweeping up those who are price-sensitive or not particularly interested in the tailored experiences pure-plays are generally better at delivering, a one-size-fits-all approach that has drawn criticism in the past.