Technology Radar Vol 31
Thoughtworks Technology Radar is a twice-yearly snapshot of tools, techniques, platforms, languages and frameworks. This knowledge-sharing tool is based on our global teams’ experience and highlights things you may want to explore on your projects.
Each insight we share is represented by a blip. Blips may be new to the latest Radar volume, or they can move rings as our recommendation has changed.
The rings are:
Adopt. Blips that we think you should seriously consider using.
Trial. Things we think are ready for use, but not as completely proven as those in the Adopt ring.
Assess. Things to look at closely, but not necessarily trial yet — unless you think they would be a particularly good fit for you.
Hold. Proceed with caution.
Explore the interactive version by quadrant, or download the PDF to read the Radar in full. If you want to learn more about the Radar, how to use it or how it’s built, check out the FAQ.
Themes for this volume
For each volume of the Technology Radar, we look for patterns emerging in the blips that we discuss. Those patterns form the basis of our themes.
Coding assistance antipatterns
To the surprise of no one, generative AI and LLMs dominated our conversations for this edition of the Radar, including emerging patterns around their use by developers. Patterns inevitably lead to antipatterns — contextualized practices developers should avoid. We see some antipatterns starting to appear in the hyperactive AI space, including the mistaken notion that pairing with AI can fully replace pair programming with a human, overreliance on coding assistance suggestions, code quality issues with generated code and faster growth rates of codebases. AI tends to solve problems via brute force rather than abstraction, such as using dozens of stacked conditionals rather than the Strategy design pattern. The code quality issues in particular highlight an area of continued diligence for developers and architects to make sure they don’t drown in "working-but-terrible" code. Thus, team members should double down on good engineering practices — such as unit testing, architectural fitness functions and other proven governance and validation techniques — to make sure that AI is helping your endeavors rather than encumbering your codebase with complexity.
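To make the stacked-conditionals point concrete, here is a minimal, hypothetical TypeScript sketch; the shipping-cost scenario and every name in it are invented for illustration. The first function shows the brute-force style often seen in generated code, while the second expresses the same rules with the Strategy pattern, so new cases can be added without editing existing branches.

    // Brute-force style often seen in generated code: every case inline.
    function shippingCostNaive(carrier: string, weightKg: number): number {
      if (carrier === "standard") {
        return 5 + weightKg * 0.5;
      } else if (carrier === "express") {
        return 12 + weightKg * 0.9;
      } else if (carrier === "overnight") {
        return 25 + weightKg * 1.5;
      }
      throw new Error(`Unknown carrier: ${carrier}`);
    }

    // Strategy pattern: each carrier's rule is an interchangeable object,
    // so adding a carrier means adding an entry, not another branch.
    interface ShippingStrategy {
      cost(weightKg: number): number;
    }

    const strategies: Record<string, ShippingStrategy> = {
      standard: { cost: (w) => 5 + w * 0.5 },
      express: { cost: (w) => 12 + w * 0.9 },
      overnight: { cost: (w) => 25 + w * 1.5 },
    };

    function shippingCost(carrier: string, weightKg: number): number {
      const strategy = strategies[carrier];
      if (!strategy) throw new Error(`Unknown carrier: ${carrier}`);
      return strategy.cost(weightKg);
    }

Each strategy can be unit tested in isolation, which is exactly the kind of seam that the engineering practices mentioned above rely on.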
Rust is anything but rusty
Rust has gradually become the systems programming language of choice. In every Radar session, Rust comes up in the subtext of our conversations over and over; many of the tools we discuss are written in Rust. It's the language of choice for replacing older system-level utilities as well as for rewriting parts of an ecosystem for improved performance — indeed, the most common epithet for Rust-based tools seems to be “blazingly fast.” For example, several tools in the Python ecosystem now have Rust-based alternatives that offer noticeably better performance. The language designers and the community have managed to create a well-liked ecosystem of core SDKs, libraries and dev tools, while providing stellar execution speed with fewer pitfalls than many of its predecessors. Many on our team are fans of Rust, and it seems that most developers who use it hold it in high regard.
The gradual rise of WASM
WASM (WebAssembly) is a binary instruction format for a stack-based virtual machine, which sounds esoteric and too low-level to interest most developers until they see the implications: the ability to run complex applications within a browser sandbox. WASM runs within existing JavaScript virtual machines, allowing applications that could formerly only be implemented in native frameworks and extensions to be embedded within browsers. All four major browsers (Chrome, Firefox, Safari and Edge) now support WASM 1.0, opening exciting possibilities for sophisticated, portable and cross-platform development. We've watched this standard over the last few years with great interest, and we're happy to see it start to flex its capabilities as a legitimate deployment target.
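As a minimal sketch of what running within an existing JavaScript virtual machine looks like in practice, the TypeScript below loads a WASM module in the browser and calls one of its exports. WebAssembly.instantiateStreaming and fetch are the standard browser APIs; the module name example.wasm and its exported add function are assumptions made purely for illustration.

    // Minimal sketch: load a WASM module in the browser and call an export.
    // "example.wasm" and its add(a, b) export are hypothetical.
    async function runWasm(): Promise<void> {
      const { instance } = await WebAssembly.instantiateStreaming(
        fetch("example.wasm"), // compiled module served like any other static asset
        {}                     // host imports the module needs; none in this sketch
      );
      // Exports arrive untyped, so narrow them by hand before calling.
      const { add } = instance.exports as { add(a: number, b: number): number };
      console.log(add(2, 3)); // expected to log 5
    }

    runWasm().catch(console.error);

The module itself can be compiled from Rust, C++, Go or any other language with a WASM toolchain, which is what makes the portable, cross-platform story compelling.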
The Cambrian explosion of generative AI tools
Following the trajectory set out in the last few volumes of the Radar, we expected generative AI to feature prominently in our discussions. And yet, we were still surprised by the explosion in the ecosystem of technology supporting language models: guardrails, evals, tools to build agents, frameworks for working with structured output, vector databases, cloud services and observability tools. In many ways, this rapid and varied growth makes perfect sense: the initial experience, the simplicity of a plain-text prompt to a language model, has given way to the engineering of software products. These may not live up to the dreams and wild claims made after people sent their first prompts to ChatGPT, but we see sensible and productive use of generative AI at many of our clients, and all these tools, platforms and frameworks play a role in getting LLM-based solutions into production. As was the case with the explosion of the JavaScript ecosystem around 2015, we expect this chaotic growth to continue for a while.
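As one concrete illustration of what these tools automate, the TypeScript sketch below implements a structured-output check with a simple retry guardrail by hand; callModel, Invoice and extractInvoice are hypothetical names standing in for whatever client library and schema a team actually uses.

    // Hypothetical sketch of enforcing structured output from a language model.
    // callModel is a stand-in for a real LLM client; all names are illustrative.
    interface Invoice {
      customer: string;
      total: number;
    }

    async function extractInvoice(
      callModel: (prompt: string) => Promise<string>,
      text: string,
      retries = 2
    ): Promise<Invoice> {
      const prompt = `Return only JSON with fields "customer" (string) and "total" (number) for: ${text}`;
      for (let attempt = 0; attempt <= retries; attempt++) {
        try {
          const parsed = JSON.parse(await callModel(prompt));
          // Guardrail: accept the response only if it matches the expected shape.
          if (typeof parsed.customer === "string" && typeof parsed.total === "number") {
            return { customer: parsed.customer, total: parsed.total };
          }
        } catch {
          // Malformed JSON from the model: fall through and retry.
        }
      }
      throw new Error("Model did not return valid structured output");
    }

Frameworks for structured output, guardrails and evals essentially industrialize this loop, turning schema definitions, retries, validation and monitoring into configuration rather than hand-written code.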
Contributors
The Technology Radar is prepared by the Thoughtworks Technology Advisory Board, comprising:

Rachel Laycock (CTO) • Martin Fowler (Chief Scientist) • Rebecca Parsons (CTO Emerita) • Bharani Subramaniam • Birgitta Böckeler • Camilla Falconi Crispim • Erik Doernenburg • James Lewis • Ken Mugrage • Maya Ormaza • Mike Mason • Neal Ford • Pawan Shah • Scott Shaw • Selvakumar Natesan • Shangqi Liu • Sofia Tania • Thomas Squeo • Vanya Seth • Will Amaral
Inside the Technology Radar is a short documentary that provides a fresh insight into all things Technology Radar.
Subscribe. Stay informed.
Sign up to receive emails about future Technology Radar releases and bi-monthly tech insights from Thoughtworks.