Macro trends in the tech industry | Nov 2018
Published: November 14, 2018
Twice a year we create the ThoughtWorks Technology Radar, an opinionated look at what's happening in the enterprise tech world. We cover tools, techniques, languages, and platforms, and we generally call out over one hundred individual "blips". Along with this detail we write about a handful of overarching "themes" that can help a reader see the forest for the trees, and in this piece, I try to capture not just Radar themes but wider trends across the tech industry today. These "macro trends" articles are only possible with the help of the large technology community at ThoughtWorks, so I'd like to thank everyone who has contributed ideas and commented on drafts.
Quantum Computing is both here and not here
We're continuing to see traction in the quantum computing field. Academic institutions are partnering with commercial organizations, large investments are being made, and a community of startups and university spinouts is springing up. Microsoft's Q# language allows developers to get started with quantum computing and run algorithms against simulated machines, as well as tap into real cloud-based quantum computers. IBM Q is its competing offering, again partnering with large commercial organizations, academia, and startups. At a local level, we've hosted quantum computing hack nights with extremely good community turnout.
But quantum still isnât ready for prime-time.
The largest (non-classified) quantum computer available as of this writing is still small. There are a lot of headlines indicating the forthcoming demise of conventional cryptography, but 2048-bit RSA keys likely require a quantum computer of at least 6,000 qubits in size, and more modern algorithms such as AES probably have better security against quantum attacks. A commercial quantum computer is expected to need at least 100 qubits, as well as improved stability and error correction over what is available today. Practical uses for quantum computing are still in the realm of research exercises, for example, modeling the properties of complex molecules in chemistry. For now, at least, mainstream enterprise use of quantum computing seems a long way off.
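To make the "run algorithms against simulated machines" idea concrete, here is a deliberately tiny sketch of what such a simulator does underneath: it tracks a qubit's statevector as a pair of complex amplitudes and applies gates as linear maps. This is illustrative only; real toolkits such as Microsoft's Quantum Development Kit handle multi-qubit registers, noise, and hardware targets.

```python
import math

# Minimal single-qubit statevector simulator (illustrative sketch only).
# A state |psi> = a|0> + b|1> is represented as a pair of complex amplitudes.

def hadamard(state):
    """Apply the Hadamard gate, which maps |0> to an equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    """Born rule: measurement probabilities are squared amplitude magnitudes."""
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

zero = (1 + 0j, 0 + 0j)   # qubit initialised to |0>
plus = hadamard(zero)     # H|0> = (|0> + |1>) / sqrt(2)
p0, p1 = probabilities(plus)
print(round(p0, 3), round(p1, 3))  # each outcome has probability 0.5
```

Simulators like this scale exponentially in the number of qubits, which is precisely why classical simulation stops being viable around the scale where quantum hardware becomes interesting.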
Hyperkinetic pace of change
We've frequently observed that the pace of change in technology is not just fast: it's accelerating. When we started the Radar a decade ago, the default for entries was to remain for two Radar editions (approximately one year) with no movement before fading away automatically. However, as indicated by the formula in one of our Radar themes (pace = distance over time), change in the software development ecosystem continues to accelerate. Time has remained constant (we still create the Radar twice a year), but the distance traveled in terms of technology innovation has noticeably increased. We see an increased pace in all our Radar quadrants and also in our clients' appetite to adopt new and diverse technology choices. Given that almost everything in the world today across business, politics, and society is driven by technology, the pace of change in all these other areas increases as well. An important corollary for businesses is that there will be much less time available to adopt new technologies and business models: it's still "adapt or die," but the pressure is higher now than ever before.
For companies to compete, continuous modernization is required
The need to upgrade and replace older technology isn't new; for as long as computers have been around, a new model has been in planning or just around the corner. But it does feel like the "volume level" on the need to modernize has increased. Businesses need to move fast, and they can't do so encumbered by their legacy tech estate. Modern businesses compete to offer the best customer experiences, brand loyalty is largely dead, and the fastest movers are often the winners. This issue hits all companies, even the darlings of Silicon Valley and the startup unicorns of the world, because almost as soon as something is in production, it can be considered legacy technology and an anchor rather than an asset. The success of these companies lies in constantly upgrading and refining their technology and platforms.
My colleague George Earle and I have recently written an article detailing the imperative to modernize as well as a plan for doing it.
Industry catches up to previous big shifts
It was obvious to us that containers (especially Docker) and container platforms (especially Kubernetes) were important from the get-go. A couple of Radars ago, we declared that Kubernetes had won the battle and was the modern platform of choice; industry now seems to agree with us. There are a phenomenal number of Kubernetes-related blips on this edition of the Radar: Knative, gVisor, Rook, SPIFFE, kube-bench, Jaeger, Pulumi, Heptio Ark, and acs-engine, to name but a few. These all help with the Kubernetes ecosystem, configuration scanning, security auditing, disaster recovery, and so on. All these tools help us to build clusters more easily and reliably.
Lingering Enterprise Antipatterns
In this edition of the Radar, many of our "Hold" entries are simply new ways to be misguided in putting together enterprise systems. We have new tools and platforms, but we tend to keep making the same mistakes. Here are a few examples:
- Recreating ESB antipatterns with Kafka: this is the "egregious spaghetti box" all over again, where a perfectly good technology (Kafka) is being abused for the sake of centralization or efficiency.
- Overambitious API gateways: a perfectly good technology for access management and rate limiting of APIs also happens to have transformation and business logic added to it.
- Data-hungry packages: we buy a software package to do one thing, but it ends up taking over our organization, feeding on more and more data and accidentally becoming the "master" for all of it, while requiring a lot of integration work too.
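The antidote to the Kafka-as-ESB antipattern is the old "smart endpoints, dumb pipes" principle: the broker just carries raw events, and each consuming service owns its own transformation logic. The sketch below illustrates the shape of that split; a plain in-memory list stands in for a Kafka topic, and the BillingService and its event fields are hypothetical names chosen for illustration.

```python
# "Smart endpoints, dumb pipes" sketch: the pipe moves events untouched;
# domain transformation lives in the consuming service, not the broker.

topic = []  # stand-in for a message broker topic


def publish(event):
    topic.append(event)  # the pipe does no enrichment or routing logic


class BillingService:
    """A consumer that applies its own domain transformation."""

    def __init__(self):
        self.invoices = []

    def consume(self, event):
        if event["type"] == "order_placed":
            # The transformation belongs here, in the endpoint.
            self.invoices.append({
                "order_id": event["id"],
                "amount": event["qty"] * event["unit_price"],
            })


billing = BillingService()
publish({"type": "order_placed", "id": 42, "qty": 3, "unit_price": 10})
for event in topic:
    billing.consume(event)
print(billing.invoices)
```

Centralizing that `consume` logic inside the broker layer is what turns a good event log into the "egregious spaghetti box" described above.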
JavaScript community goes quiet
We've previously written about the churn in the JavaScript ecosystem, but the community appears to be emerging from a period of rapid growth to one with less excitement. Our contacts within the publishing industry tell us that searches for JavaScript-related content have been replaced by an interest in a group of languages led by Go, Rust, Kotlin, and Python. Could it be that Atwood's Law has come to pass (everything that can be written in JavaScript has been written in JavaScript) and developers have moved on to new languages? This could also be an effect of the rise of micro frontends, where a polyglot approach is much more feasible, allowing developers to experiment with using the best language for each component. Either way, there's a lot less JavaScript on our Radar in this edition.
Cloud happened, and it's still happening
One of our themes on this Radar is the surprising "stickiness" of cloud providers, who are in a tight race to win hosting business and often add features and services to improve the attractiveness of their product. Using these vendor-specific features can lead to accidental lock-in, but of course will accelerate delivery, so they're a bit of a double-edged sword.
As always, we recommend going in with your eyes open and evaluating use cases, lock-in potential, and the cost and impact of needing to switch providers.
In the cloud space right now, we see more and more organizations successfully moving to the public cloud, with more mature conversations and understanding around what this means. Bigger companies, even banks and other regulated industries, are moving larger and more sensitive workloads to the cloud and bringing their regulators along on the journey. In some cases, this means they're mandated to pursue a multi-cloud strategy for those material workloads. Many of the blips on today's Radar (multi-cloud sensibly, financial sympathy, and so on) are indicators that cloud is finally mainstream for all organizations and that the "long tail" of migration is here.
Serverless gains traction, but itâs not a slam dunk (yet)
"Serverless" architectures are one of the biggest trends in today's IT landscape, but also possibly the most misunderstood. In this edition of the Radar, we actually don't highlight any blips for serverless tech; we've done so in the past, but this time around we felt nothing quite made the cut. That's not to say things are quiet in the serverless space, however. Amazon recently released an SLA for Lambda, something that is relatively rare for AWS services, and almost everything on the AWS platform has some sort of Lambda tie-in. The other major cloud vendors offer competing (but similar) services and tend to respond whenever Amazon makes a move in this space.
Where things get tricky is if an organization simply assumes that its workload is appropriate for serverless techniques and carries on regardless, or doesn't really do the math on whether it's better to pay for per-use functions than to set up and maintain a dedicated server instance. We'd highlight two key areas where serverless needs to mature:
- Patterns for use: architectural and workload models where the approach is or isn't the right one. A better understanding is needed of how to compose an application from serverless components as well as containers and virtual machines.
- Pricing model: not well understood or easy to tune, leading to large bills and limited applicability. Ideally, we should compare total cost of ownership, including things like DevOps engineering time, server maintenance, and so on.
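"Doing the math" on functions versus a dedicated server can be a back-of-the-envelope calculation like the one below. The per-request and per-GB-second prices and the $70/month server figure are illustrative assumptions, not current vendor list prices; the point is only that the crossover depends heavily on traffic volume.

```python
# Rough cost comparison: pay-per-use functions vs a dedicated server.
# All prices below are assumed for illustration; substitute your own.

REQUEST_PRICE = 0.20 / 1_000_000   # $ per invocation (assumed)
GB_SECOND_PRICE = 0.0000166667     # $ per GB-second of compute (assumed)


def monthly_function_cost(invocations, avg_duration_s, memory_gb):
    """Total monthly bill for a function-as-a-service workload."""
    compute = invocations * avg_duration_s * memory_gb * GB_SECOND_PRICE
    requests = invocations * REQUEST_PRICE
    return compute + requests


server_cost = 70.0  # assumed all-in monthly cost of a small dedicated instance

# Same function profile (200 ms, 512 MB), very different traffic levels:
low_traffic = monthly_function_cost(1_000_000, 0.2, 0.5)
high_traffic = monthly_function_cost(100_000_000, 0.2, 0.5)

print(f"low:  ${low_traffic:.2f}/month")
print(f"high: ${high_traffic:.2f}/month vs ${server_cost:.2f} for a server")
```

Under these assumptions the light workload costs a couple of dollars a month, while the heavy one overshoots the dedicated server severalfold, which is exactly the kind of tuning-blind spot the bullet above warns about. A real comparison would also fold in the DevOps and maintenance time mentioned above.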
Engineering for failure
In the past we've highlighted Netflix's testing tools that deliberately cause failures in a production system, so you can be sure that your architecture can tolerate failure. This Chaos Engineering has become more widespread and expanded into related areas. In this Radar, we highlight the 1% Canary and Security Chaos Engineering as specific instances of engineering for failure.
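The core mechanic of a 1% canary is simple: route roughly one request in a hundred to the new release and watch its health signals before widening the rollout. Here is a minimal sketch of that routing decision; the fraction, seed, and version names are illustrative, and a production setup would do this at the load balancer or service mesh rather than in application code.

```python
import random

# Sketch of 1% canary routing: a small, randomly chosen slice of traffic
# goes to the new release so problems surface before a full rollout.

CANARY_FRACTION = 0.01  # assumed rollout slice


def choose_version(rng):
    """Pick which release serves a request."""
    return "canary" if rng.random() < CANARY_FRACTION else "stable"


rng = random.Random(7)  # seeded so the sketch is repeatable
counts = {"stable": 0, "canary": 0}
for _ in range(100_000):
    counts[choose_version(rng)] += 1

canary_share = counts["canary"] / 100_000
print(counts)
```

In practice the interesting part is what happens next: comparing error rates and latency between the two cohorts and rolling back automatically if the canary cohort degrades.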
Enduring good practices
As a happy counterpoint to the problems with lingering antipatterns, in this Radar we highlight that good practices are enduring in the industry. Whenever a new technology comes along, we all experiment with it, trying to figure out which use cases are the best fit and what the limits are of what it can and can't do. A good example of this is the recent emphasis on data and machine learning. Once that new thing has been experimented with, and we've learnt what it's good for and what it can do, we need to apply good engineering practices to it. In the machine learning case, we'd recommend applying automated testing and continuous delivery practices; combined, we call this Continuous Intelligence. The point here is that all the practices we've developed over the years to build software well continue to be applicable to all the new things. Doing a good job with the "craft" of software creation continues to be important, no matter how the underlying tech changes.
That's it for this edition of Macro Trends. If you've enjoyed this commentary, you might also like our recently re-launched podcast series, where I am a host along with several of my ThoughtWorks colleagues. We release a podcast every two weeks covering topics such as agile data science, distributed systems, continuous intelligence, and IoT. Check it out!
Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of ThoughtWorks.