
Macro trends in the tech industry | April 2024

Volume 30 of the Technology Radar is here, and we once again present a look at the macro trends that surfaced during our Radar meeting, as well as additional trends crowd-sourced from the broader community. Some of these ended up as themes in the latest volume, while others landed on the cutting room floor.

All things AI

Of course, all things AI and GenAI played a large role in our discussions. In the end, we have 34 blips that are GenAI-related. As for our themes, two are GenAI-specific: AI-assisted software development teams and emerging architectural patterns for LLMs. In addition, we had candidate themes around topics both general and quite specific. We discussed the distinction between GenAI and plain old regular AI (or even statistical techniques, for that matter). While GenAI is clearly the bright, shiny object, let’s not forget that there are problems better suited to non-GenAI techniques.

The footprint of AI applicability is growing rapidly, impacting varied processes, use cases and organizational units. We are seeing GenAI and LLMs increasingly deployed at the edges, which potentially enhances privacy for users. AI is also becoming more common in digital products, even though it is often hidden. Autonomous AI agents are also tackling a broader range of problems, moving AI away from just the prompt interface. Of course, as more of these applications move from the proof of concept stage to production, we have the associated need for MLOps, LLMOps, LLM observability and all the other solid engineering practices that we use for more traditional production systems.

We are still learning about the different ways we can use AI; sometimes, taking advantage of AI requires changing our approach to problem solving. Such culture and mindset shifts are challenging, but they will be necessary to take full advantage of AI’s capabilities.

We are also seeing signs, at least anecdotally, of companies working to reduce their workforces in anticipation of the potential and realized productivity improvements arising from the use of AI. The rate at which this is occurring varies widely based on the type of work being addressed. Ideally, the AI systems handle the “low-hanging fruit” questions and problems, freeing the humans to do what they do best, applying their empathy or creativity, for example. Since we are still learning what these AI approaches are good at, and where they run into trouble, reaching that ideal state is a challenge.

Let’s move on from AI.

Is open source still viable?

Throughout our history, we have been big proponents of and contributors to open source, which is why we are so dismayed by the licensing moves many open source vendors have made. One approach uses a license that is free for non-commercial use but requires a paid license for commercial use, as HashiCorp has done. Another has the open source project distribute only code, not builds, increasing the burden on organizations running the project on premises. Others use a freemium model, with a restricted feature set available for free and licenses required for more advanced features.

A lot of blame has been placed on private equity and venture capital firms for putting more pressure on vendors for revenue and profitability, particularly as the tech industry has slowed. Others speculate that the open source vendors are simply protecting themselves and their intellectual property (IP) from the cloud vendors who would otherwise profit from that IP through hosted cloud services. The business model around open source is clearly changing, though, and we will continue to see churn in the open source world.


Returning briefly to AI, there is plenty of debate about what open source means in the context of large language models. Is it the algorithm used to run the model, the algorithm used to train it, the parameters resulting from the training or the training data itself that is most important to open source? The Open Source Initiative has a working group trying to define what it means for models to be ‘open’.

Online developer tools

We are seeing both increased availability of online developer tools and increased concern about what developers give up by using such tools — mostly their data. Data leakage through the hosting of data by third parties is becoming a significant concern. Default settings don’t always reflect a privacy-first perspective, and some tools have tried to eliminate self-hosting options. With the increased use of AI coding assistants, intellectual property issues are being increasingly scrutinized.

Monoliths rise again

Recent trends such as the rise of microservices, increased interest in event-driven architectures and other distributed architectures have increased the complexity of deployments and of the systems themselves. While the evolvability and flexibility of these architectural approaches are often worth the complexity, we are seeing a trend back towards the monolithic approach, or, to be precise, the modular monolith. Indeed, it’s almost like monoliths are cool again. As with anything to do with architecture, there are advantages and disadvantages to both the monolithic approach and the more distributed approaches. As I wrote several years ago, microservices would never go in the Adopt ring, because there are too many situations in which you wouldn’t want to use them.
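To make the modular monolith idea a bit more concrete, here is a minimal Rust sketch (purely illustrative; the module and function names are invented): a single deployable binary whose internal modules expose narrow public interfaces and keep their implementation details private, so boundaries are enforced by the compiler rather than by network hops.

```rust
// A single deployable binary with explicit internal boundaries.
// Module and function names are hypothetical, for illustration only.

mod billing {
    // Only this narrow interface is visible to the rest of the application.
    pub struct Invoice {
        pub total_cents: u64,
    }

    pub fn create_invoice(order_total_cents: u64) -> Invoice {
        Invoice { total_cents: apply_tax(order_total_cents) }
    }

    // Private helper: other modules cannot reach in and depend on it.
    fn apply_tax(cents: u64) -> u64 {
        cents + cents / 10
    }
}

mod orders {
    // The orders module uses billing only through its public interface,
    // much as it would call a separate service, but via an in-process call.
    use crate::billing;

    pub fn checkout(total_cents: u64) -> billing::Invoice {
        billing::create_invoice(total_cents)
    }
}

fn main() {
    let invoice = orders::checkout(10_000);
    println!("invoice total: {} cents", invoice.total_cents);
}
```

The point is not the particular language but the discipline: boundaries like these can later be promoted to separate services if and when the extra complexity is actually warranted.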

Rethinking infrastructure as code

Infrastructure as code (IaC) has been around for a long time, bringing to the provisioning of infrastructure many of the practices and advantages of working with more traditional code. However, there are different approaches to specifying infrastructure through code and then managing the resulting infrastructure code. We saw a few blip proposals for different, new approaches to managing IaC, evidence of an emerging move toward infrastructure orchestration platforms. We think Winglang is a particularly interesting tool because of the way it offers high-level abstractions over specific parts of your infrastructure. However, it’s safe to say we’ve not yet landed on an overall approach that addresses the different sets of concerns.

Rust everywhere

Cybersecurity issues continue to plague our industry. Memory safety vulnerabilities enable a significant number of the exploits we deal with. Advances have been made in security testing and scanning, thanks to interesting tools such as MobSF, Wiz and Orca, but exploits continue, and these kinds of issues are difficult for a code review process to catch.

Using a memory-safe programming language is an effective way to prevent these vulnerabilities from occurring in our code. Rust is one such language, and it has the added benefit of being highly performant. The Rust ecosystem continues to expand as more tools are developed to support programming in Rust. Efforts are underway to get Rust certified for various embedded use cases. Expanding the tooling space and growing Rust expertise will certainly help in our fight against security vulnerabilities.
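As a small illustration of what memory safety buys us, here is a minimal Rust sketch (illustrative only): the commented-out lines express a classic use-after-free, which a C or C++ compiler would accept and which would then be undefined behavior at runtime, whereas Rust’s borrow checker rejects it at compile time.

```rust
fn main() {
    // A use-after-free, expressed in Rust: the commented-out block below
    // tries to keep a reference to a value after the value is dropped.
    // The borrow checker rejects it at compile time ("`greeting` does not
    // live long enough"), so this entire class of bug never reaches production.
    //
    // let dangling: &String;
    // {
    //     let greeting = String::from("hello");
    //     dangling = &greeting;
    // } // `greeting` is dropped (freed) here...
    // println!("{dangling}"); // ...so this would read freed memory.

    // The compiling alternative: move ownership out of the inner scope so
    // the data lives for as long as it is actually used.
    let owned = {
        let greeting = String::from("hello");
        greeting // ownership moves out; no dangling reference is possible
    };
    println!("{owned}");
}
```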

What happened to mixed reality?

Reviews abound for the Apple Vision Pro, and most of them aren’t terribly flattering. There was such anticipation that Apple’s release would trigger the creation of applications in extended reality and spatial computing. However, the applications haven’t really changed much. We still see uses in, of course, gaming, as well as collaborative spaces, remote maintenance and training, to name a few.

We’re seeing increasing interest in using this technology to support digital twins, allowing for what-if scenario simulations in manufacturing and supply chains. It appears that we are back in wait-and-see mode regarding new business applications of XR.

Pull requests and continuous integration

We are firm believers in the power of true continuous integration (CI). For us, you can’t achieve true CI without trunk-based development. However, pull requests are quite common. We certainly acknowledge that pull requests are preferable in certain situations, such as distributed open source projects with lots of community input, but we still believe that trunk-based development is superior for many organizations. What we’ve been seeing recently is an increasing number of tools that try to get closer to true CI while continuing to use the pull request model.

Some of these tools are using AI to automate some of the work associated with merging certain kinds of pull requests. It’s unclear why the pull request model has such a passionate following, but anything that drags pull requests closer to allowing true CI has to be a good thing.

Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect official positions.
