
Thoughtworks Technology Radar

Volume 31 | October 2024

Platforms


Adopt

No blips

Trial

  • 24. Databricks Unity Catalog

    Databricks Unity Catalog is a data governance solution for assets such as files, tables or machine learning models in a lakehouse. It's a managed version of the open-source Unity Catalog and can be used to govern and query data kept in external stores or under Databricks management. In the past our teams have worked with a variety of other data management solutions, but Unity Catalog's combined support for governance, metastore management and data discovery makes it attractive because it reduces the need to manage multiple tools. One complication our team discovered is the lack of automatic disaster recovery in the Databricks-managed Unity Catalog. They were able to configure their own backup and restore functionality, but a Databricks-provided solution would have been more convenient. Note that even though these governance platforms usually implement a centralized solution to ensure consistency across workspaces and workloads, the responsibility to govern can still be federated by enabling individual teams to govern their own assets; a brief sketch of per-team grants appears after this list.

  • 25. FastChat

    FastChat is an open platform for training, serving and evaluating large language models. Our teams use its model-serving capabilities to host multiple models for different purposes, all in a consistent OpenAI API format. FastChat operates on a controller-worker architecture, allowing multiple workers to host different models. It supports worker types such as vLLM, LiteLLM and MLX; we use vLLM workers for their high-throughput capabilities. Depending on the use case — latency or throughput — different types of FastChat model workers can be created and scaled. For example, the model used for code suggestions in developer IDEs requires low latency and can be scaled with multiple FastChat workers to handle concurrent requests efficiently. In contrast, the model used for text-to-SQL doesn't need multiple workers due to lower demand and different performance requirements. Our teams also leverage FastChat's scaling capabilities for A/B testing: we configure FastChat workers with the same model but different hyperparameter values, pose identical questions to each and identify optimal hyperparameter values. When transitioning models in live services, we conduct A/B tests to ensure seamless migration. For example, when we recently migrated to a new model for code suggestions, running both models concurrently and comparing outputs let us verify that the new model met or exceeded the previous model's performance without disrupting the developer experience. A minimal sketch of calling a FastChat-hosted model through the OpenAI client appears after this list.

  • 26. GCP Vertex AI Agent Builder

    GCP Vertex AI Agent Builder provides a flexible platform for creating AI agents using natural language or a code-first approach. The tool seamlessly integrates with enterprise data through third-party connectors and has all the necessary tools to build, prototype and deploy AI agents. As the need for AI agents grows, many teams struggle to understand their benefits and implementation. GCP Vertex AI Agent Builder makes it easier for developers to prototype agents quickly and handle complex data tasks with minimal setup. Our developers have found it particularly useful for building agent-based systems — such as knowledge bases or automated support systems — that efficiently manage both structured and unstructured data, which makes it a valuable tool for developing AI-driven solutions.

  • 27. Langfuse

    LLMs function as black boxes, making it difficult to determine their behavior. Observability is crucial for opening this black box and understanding how LLM applications operate in production. Our teams have had positive experiences with Langfuse for observing, monitoring and evaluating LLM-based applications. Its tracing, analytics and evaluation capabilities allow us to analyze completion performance and accuracy, manage costs and latency and understand production usage patterns, thus facilitating continuous, data-driven improvements. Instrumentation data provides complete traceability of the request-response flow and intermediate steps, which can be used as test data to validate the application before deploying new changes. We've used Langfuse with RAG (retrieval-augmented generation) and other LLM architectures, as well as with LLM-powered autonomous agents. In a RAG-based application, for example, analyzing low-scoring conversation traces helps identify which parts of the architecture — pre-retrieval, retrieval or generation — need refinement. Other tools in this space are also worth considering. A minimal tracing sketch appears after this list.

  • 28. Qdrant

    Qdrant is an open-source vector similarity search engine and database written in Rust. It supports a wide range of text and multimodal dense vector embedding models. Our teams have used open-source embedding models to build multiple product knowledge bases. We use Qdrant as an enterprise vector store, keeping each product's vector embeddings in a separate collection so that each knowledge base is isolated in storage; user access policies are managed in the application layer. A sketch of this per-product collection setup appears after this list.

  • 29. Vespa

    Vespa is an open-source search engine and big data processing platform. It's particularly well-suited for applications that require low latency and high throughput. Our teams like Vespa's ability to implement hybrid search using multiple retrieval techniques, to efficiently filter and sort many types of metadata, to implement multi-phased ranking, to index multiple vectors (e.g., one per chunk) per document without duplicating all the metadata into separately indexed documents and to retrieve data from multiple indexed fields at once. A rough sketch of a hybrid query appears after this list.
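
As a small illustration of the federated governance described for Databricks Unity Catalog (blip 24), a team might manage access to its own assets from a Databricks notebook roughly as follows. This is only a sketch: the catalog, schema, table and group names are hypothetical, and spark is the session Databricks provides.

    # Sketch of per-team governance in Unity Catalog (hypothetical names).
    # Assumes a Databricks notebook where `spark` is already defined.
    spark.sql("CREATE CATALOG IF NOT EXISTS sales")
    spark.sql("CREATE SCHEMA IF NOT EXISTS sales.orders")

    # The owning team grants read access on its own assets to another group.
    spark.sql("GRANT USE CATALOG ON CATALOG sales TO `analytics-team`")
    spark.sql("GRANT USE SCHEMA ON SCHEMA sales.orders TO `analytics-team`")
    spark.sql("GRANT SELECT ON SCHEMA sales.orders TO `analytics-team`")

    # Consumers then query through the governed three-level namespace.
    df = spark.table("sales.orders.completed_orders")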
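
Because FastChat (blip 25) exposes hosted models through an OpenAI-compatible API, application code can reach any worker with the standard OpenAI Python client. A minimal sketch, assuming the FastChat controller, a vLLM worker and the OpenAI API server are already running locally; the model name is hypothetical.

    # Calling a model served by FastChat's OpenAI-compatible API server.
    # Assumes the controller, a worker and the API server are running, e.g.:
    #   python3 -m fastchat.serve.controller
    #   python3 -m fastchat.serve.vllm_worker --model-path <model>
    #   python3 -m fastchat.serve.openai_api_server --host 0.0.0.0 --port 8000
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8000/v1",  # FastChat's OpenAI-compatible endpoint
        api_key="EMPTY",                      # FastChat does not check the key by default
    )

    response = client.chat.completions.create(
        model="code-suggestion-model",  # hypothetical name of the hosted model
        messages=[{"role": "user", "content": "Suggest a docstring for a binary search function."}],
    )
    print(response.choices[0].message.content)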
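
For Langfuse (blip 27), tracing an LLM application is largely a matter of instrumenting the functions involved. A minimal sketch using the decorator from the Python SDK; credentials come from environment variables, and the retrieval and generation steps are placeholders.

    # Tracing a two-step LLM flow with Langfuse's Python SDK.
    # Assumes LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY and LANGFUSE_HOST are set.
    from langfuse.decorators import observe

    @observe()  # records inputs, outputs and latency as a span in a trace
    def retrieve_context(question: str) -> str:
        return "placeholder retrieved documents"  # stand-in for the retrieval step

    @observe()
    def answer(question: str) -> str:
        context = retrieve_context(question)  # nested span under the same trace
        return f"Answer based on: {context}"  # stand-in for the generation step

    print(answer("How do I rotate an API key?"))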
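
The per-product isolation described for Qdrant (blip 28) comes down to one collection per knowledge base. A minimal sketch with the qdrant-client library; the collection name, vector size and placeholder vectors are illustrative, since real embeddings would come from an embedding model.

    # One Qdrant collection per product knowledge base (illustrative names).
    from qdrant_client import QdrantClient
    from qdrant_client.models import Distance, PointStruct, VectorParams

    client = QdrantClient(url="http://localhost:6333")

    # A separate collection keeps each product's knowledge base isolated in storage.
    client.create_collection(
        collection_name="product_a_kb",
        vectors_config=VectorParams(size=384, distance=Distance.COSINE),
    )

    client.upsert(
        collection_name="product_a_kb",
        points=[PointStruct(id=1, vector=[0.1] * 384, payload={"doc": "setup guide"})],
    )

    hits = client.search(
        collection_name="product_a_kb",
        query_vector=[0.1] * 384,  # placeholder query embedding
        limit=3,
    )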
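
And for Vespa (blip 29), hybrid retrieval combines a keyword match with a nearest-neighbor vector match in a single query. A rough sketch via the pyvespa client, assuming an application with an embedding tensor field and a hybrid rank profile has already been deployed; all names and the query vector are hypothetical.

    # Hybrid keyword-plus-vector query against a running Vespa application.
    # Assumes a deployed schema with an `embedding` field and a `hybrid` rank profile.
    from vespa.application import Vespa

    app = Vespa(url="http://localhost", port=8080)

    response = app.query(
        body={
            "yql": "select * from sources * where userQuery() or "
                   "({targetHits: 10}nearestNeighbor(embedding, q))",
            "query": "reset my password",
            "ranking": "hybrid",
            "input.query(q)": [0.1] * 384,  # placeholder query embedding
            "hits": 5,
        }
    )
    for hit in response.hits:
        print(hit["fields"])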

Assess

  • 30. Azure AI Search

    Azure AI Search, formerly known as Azure Cognitive Search, is a cloud-based search service designed to handle structured and unstructured data for applications like knowledge bases, particularly in retrieval-augmented generation (RAG) setups. It supports various types of search, including keyword, vector and hybrid search, which we believe will become increasingly important. The service automatically ingests common unstructured data formats, including PDF, DOC and PPT, streamlining the process of creating searchable content. Additionally, it integrates with other Azure services, such as Azure OpenAI, allowing users to build applications with minimal manual integration effort. From our experience, Azure AI Search performs reliably and is well-suited for projects hosted in the Azure environment. Users can also define specific data processing steps during ingestion. Overall, if you're working within the Azure ecosystem and need a robust search solution for a RAG application, Azure AI Search is worth considering; a rough sketch of a hybrid query appears after this list.

  • 31. Databricks Delta Live Tables

    Databricks Delta Live Tables is a declarative framework designed for building reliable, maintainable and testable data processing pipelines. It allows data engineers to define data transformations using a declarative approach and automatically manages the underlying infrastructure and data flow. One of the standout features of Delta Live Tables is its robust monitoring capability: it provides a directed acyclic graph (DAG) of the entire data pipeline, visually representing data movement from source to final tables. This visibility is crucial for complex pipelines, helping data engineers and data scientists track data lineage and dependencies. Delta Live Tables is deeply integrated into the Databricks ecosystem, which can make customizing interfaces challenging; we recommend teams carefully evaluate the compatibility of input and output interfaces before adopting it. A minimal pipeline sketch appears after this list.

  • 32. Elastisys Compliant Kubernetes

    Elastisys Compliant Kubernetes is a specialized Kubernetes distribution designed to meet stringent regulatory and compliance requirements, particularly for organizations operating in highly regulated industries such as healthcare, finance and government. It automates security processes, provides multi-cloud and on-premises support and is built on top of a zero-trust security architecture. The emphasis on built-in compliance with laws such as GDPR and HIPAA, and with controls like ISO 27001, makes it an attractive option for companies that need a secure, compliant and reliable Kubernetes environment.

  • 33. FoundationDB

    FoundationDB is a multi-model database, acquired by Apple in 2015 and then open-sourced in April 2018. At its core, FoundationDB is a distributed key-value store that provides strictly serializable transactions. Since we first mentioned it in the Radar, it has seen significant improvements, including smart data distribution to avoid write hotspots, a new storage engine and performance optimizations. We're using FoundationDB in one of our ongoing projects and are very impressed by its architecture, which allows us to scale different parts of the cluster independently. For example, we can adjust the number of transaction logs, storage servers and proxies based on our specific workload and hardware. Despite its extensive features, FoundationDB remains remarkably easy to run and operate, even for large clusters. A minimal transaction sketch appears after this list.

  • 34. Golem

    Durable computing, a recent movement in distributed computing, uses an explicit state machine architecture style to persist the memory of serverless workloads for better fault tolerance and recovery. Golem is one of the promoters of this movement. The concept can work in some scenarios, such as long-running microservice sagas or long-running workflows in AI agent orchestration. We've evaluated Temporal previously for similar purposes, and Golem is another choice. With Golem you can write WebAssembly components in any supported language; Golem is also deterministic and supports fast startup times. We think Golem is an exciting platform worth evaluating.

  • 35. Iggy

    Iggy, a persistent message streaming platform written in Rust, is a relatively new project with impressive features. It already supports multiple streams, topics and partitions, at-most-once delivery, message expiry and TLS over the QUIC, TCP and HTTP protocols. Running as a single server, Iggy currently achieves high throughput for both read and write operations. With upcoming clustering and io_uring support, Iggy could become an alternative to Kafka.

  • 36. Iroh

    Iroh is a relatively new distributed file storage and content delivery system that's designed as an evolution of existing decentralized systems like IPFS (InterPlanetary File System). Both Iroh and IPFS can be used to create decentralized networks for storing, sharing and accessing content addressed by opaque content identifiers. However, Iroh differs from existing IPFS implementations in several ways, such as having no maximum block size and providing a mechanism for syncing data over documents. The project's roadmap includes bringing the technology to the browser via WASM, which raises some intriguing possibilities for building decentralization into web applications. If you don't want to host your own Iroh nodes, you can use its hosted cloud service. Several SDKs are already available in a variety of languages, and one goal of the project is to be more user-friendly and easier to use than alternative IPFS systems. Even though Iroh is still in its early days, it's worth keeping an eye on, as it could become a significant player in the decentralized storage space.

  • 37. Large vision model (LVM) platforms

    Large language models (LLMs) grab so much of our attention these days that we tend to overlook ongoing developments in large vision models (LVMs). These models can be used to segment, synthesize, reconstruct and analyze video streams and images, sometimes in combination with diffusion models or standard convolutional neural networks. Despite the potential for LVMs to revolutionize the way we work with visual data, we still face significant challenges in adapting and applying them in production environments. Video data, for instance, presents unique engineering challenges for collecting training data, segmenting and labeling objects, fine-tuning models and then deploying the resulting models and monitoring them in production. So, while LLMs lend themselves to simple chat interfaces or plain text APIs, a computer vision engineer or data scientist must manage, version, annotate and analyze large quantities of streaming video data; this work requires a visual interface. LVM platforms are a new category of tools and services that have emerged to address these challenges. Among them, Deepstream and Roboflow are particularly interesting to us because they combine an integrated GUI development environment for managing and annotating video streams with a set of Python, C++ or REST APIs to invoke the models from application code.

  • 38. OpenBCI Galea

    There is growing interest in the use of brain-computer interfaces (BCIs) and their potential application to assistive technologies. Non-invasive technologies using electroencephalography (EEG) and other electrophysiological signals offer a lower-risk alternative to brain implants for those recovering from injuries. Platforms are now emerging on which researchers and entrepreneurs can build innovative applications without having to worry about the low-level signal processing and integration challenges. OpenBCI is one such platform, offering open-source hardware and software for building BCI applications. OpenBCI's latest product, the Galea, combines BCI with the capabilities of a VR headset. It gives developers access to an array of time-locked physiological data streams along with spatial positioning sensors and eye tracking. This wide range of sensor data can then be used to control a variety of physical and digital devices. The SDK supports a range of languages and makes the sensor data available in Unity or Unreal. We're excited to see this capability offered in an open-source platform so researchers have access to the tools and data they need to innovate in this space.

  • 39. PGlite

    PGlite is a WASM build of the PostgreSQL database. Unlike previous attempts that required a Linux virtual machine, PGlite compiles PostgreSQL directly to WASM, allowing you to run it entirely in the web browser. You can either create an ephemeral database in memory or persist it to disk via IndexedDB. Since we last mentioned local-first applications in the Radar, the tooling has evolved considerably. With Electric and PGlite, you can now build reactive local-first applications on PostgreSQL.

  • 40. SpinKube

    SpinKube is an open-source serverless runtime for WebAssembly on Kubernetes. While Kubernetes offers robust auto-scaling capabilities, the cold start time of containers can still necessitate pre-provisioning for peak loads. We believe WebAssembly's millisecond startup times provide a more dynamic and flexible serverless solution for on-demand workloads. Since our previous discussion of Spin, the WebAssembly ecosystem has made significant advancements, and we're excited to highlight SpinKube, a platform that simplifies the development and deployment of WebAssembly-based workloads on Kubernetes.

  • 41. Unblocked

    Unblocked provides software development lifecycle (SDLC) asset and artifact discovery. It integrates with common application lifecycle management (ALM) and collaboration tools to help teams understand codebases and related resources. It improves code comprehension by delivering immediate, relevant context about the code, making it easier to navigate and understand complex systems. Engineering teams can securely and compliantly access discussions, assets and documents related to their work. Unblocked also captures and shares local knowledge that often resides only with experienced team members, making valuable insights accessible to everyone, regardless of experience level.
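
To make the hybrid search support in Azure AI Search (blip 30) concrete, here is a rough sketch of a keyword-plus-vector query for a RAG retrieval step using the azure-search-documents Python SDK. The endpoint, key, index and field names are placeholders, and the query vector would normally come from an embedding model.

    # Hybrid (keyword + vector) query against an Azure AI Search index.
    from azure.core.credentials import AzureKeyCredential
    from azure.search.documents import SearchClient
    from azure.search.documents.models import VectorizedQuery

    client = SearchClient(
        endpoint="https://<your-service>.search.windows.net",
        index_name="knowledge-base",
        credential=AzureKeyCredential("<api-key>"),
    )

    results = client.search(
        search_text="how do I rotate an API key",  # keyword part of the hybrid query
        vector_queries=[
            VectorizedQuery(
                vector=[0.1] * 1536,      # placeholder query embedding
                k_nearest_neighbors=5,
                fields="content_vector",  # hypothetical vector field in the index
            )
        ],
        top=5,
    )
    for doc in results:
        print(doc["content"])  # hypothetical field holding the chunk text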
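
The declarative style of Databricks Delta Live Tables (blip 31) means a pipeline is written as annotated functions rather than orchestration code. A minimal sketch in Python; the source path and table names are hypothetical, and the dlt module and spark session are only available inside a Databricks DLT pipeline.

    # A two-step Delta Live Tables pipeline (hypothetical names and paths).
    import dlt
    from pyspark.sql import functions as F

    @dlt.table(comment="Raw orders ingested from cloud storage")
    def orders_raw():
        return spark.read.format("json").load("/mnt/landing/orders/")

    @dlt.table(comment="Order totals per customer")
    @dlt.expect_or_drop("valid_amount", "amount > 0")  # data quality expectation
    def orders_by_customer():
        return (
            dlt.read("orders_raw")
            .groupBy("customer_id")
            .agg(F.sum("amount").alias("total_amount"))
        )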
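
For FoundationDB (blip 33), the core key-value API with strictly serializable transactions is compact. A minimal sketch with the official Python bindings, assuming a locally running cluster reachable through the default cluster file; the keys and amounts are made up.

    # Transactional reads and writes with FoundationDB's Python bindings.
    import fdb

    fdb.api_version(710)  # pin the client API version before opening the database
    db = fdb.open()       # connects via the default cluster file

    # Seed two hypothetical account balances; keys and values are plain bytes.
    db[b"acct/alice"] = b"100"
    db[b"acct/bob"] = b"0"

    @fdb.transactional
    def transfer(tr, src, dst, amount):
        # All reads and writes in this function commit as one strictly
        # serializable transaction; it is retried automatically on conflicts.
        src_balance = int(tr[src])
        dst_balance = int(tr[dst])
        tr[src] = str(src_balance - amount).encode()
        tr[dst] = str(dst_balance + amount).encode()

    transfer(db, b"acct/alice", b"acct/bob", 10)
    print(int(db[b"acct/bob"]))  # 10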

Hold

No blips

Unable to find something you expected to see?


Each edition of the Radar features blips reflecting what we came across during the previous six months. We might have covered what you are looking for on a previous Radar already. We sometimes cull things just because there are too many to talk about. A blip might also be missing because the Radar reflects our experience; it is not based on a comprehensive market analysis.

Download the PDF



Visit our archive to read previous volumes