
Self-hosted LLMs

Last updated: Sep 27, 2023
NOT ON THE CURRENT EDITION
This blip is not on the current edition of the Radar. If it was on one of the last few editions, it is likely that it is still relevant. If the blip is older, it might no longer be relevant and our assessment might be different today. Unfortunately, we simply don't have the bandwidth to continuously review blips from previous editions of the Radar.
Sep 2023
Assess

Large language models (LLMs) generally require significant GPU infrastructure to operate, but there has been a strong push to get them running on more modest hardware. Quantization of a large model can reduce memory requirements, allowing a high-fidelity model to run on less expensive hardware or even a CPU. Efforts such as llama.cpp make it possible to run LLMs on hardware including Raspberry Pis, laptops and commodity servers.
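To make that concrete, here is a minimal sketch of CPU-only inference with an already-quantized model through the llama-cpp-python bindings to llama.cpp. The model file, path and parameter values are illustrative assumptions rather than part of the original blip; any locally available GGUF model would work the same way.

# Minimal sketch, assuming the llama-cpp-python package is installed
# (pip install llama-cpp-python) and a quantized GGUF model file has
# already been downloaded; the model path is a hypothetical placeholder.
from llama_cpp import Llama

# A 4-bit quantized 7B model needs only a few GB of RAM, so it can run
# on a laptop or commodity server without any GPU.
llm = Llama(
    model_path="./models/llama-2-7b.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=2048,   # context window size
    n_threads=8,  # CPU threads used for inference
)

response = llm("Explain model quantization in one sentence:", max_tokens=64)
print(response["choices"][0]["text"])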

Many organizations are deploying self-hosted LLMs. This is often due to security or privacy concerns, or, sometimes, a need to run models on edge devices. Open-source examples include GPT-J, GPT-JT and Llama. This approach offers finer control when fine-tuning the model for a specific use case, improved security and privacy, as well as offline access. Although we've helped some of our clients self-host open-source LLMs for code completion, we recommend you carefully assess your organization's capabilities and the cost of running such LLMs before deciding to self-host.
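To illustrate what self-hosting looks like in practice, the sketch below loads one of the open models mentioned above with the Hugging Face transformers library and runs a code-completion-style prompt entirely offline. The model choice and prompt are assumptions for illustration; in practice you would download the weights once and serve the model from inside your own infrastructure.

# Minimal sketch, assuming the transformers and torch packages are
# installed and the GPT-J weights were fetched ahead of time so that
# inference runs fully offline.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-j-6B"  # one of the open models named above
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# A code-completion prompt, matching the use case described above.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))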

Apr 2023
Assess

Large language models (LLMs) generally require significant GPU infrastructure to operate. We're now starting to see ports, like llama.cpp, that make it possible to run LLMs on different hardware, including Raspberry Pis, laptops and commodity servers. As such, self-hosted LLMs are now a reality, with open-source examples including GPT-J, GPT-JT and LLaMA. This approach has several benefits: finer control when fine-tuning for a specific use case, improved security and privacy, as well as offline access. However, you should carefully assess your organization's capability and the cost of running such LLMs before deciding to self-host.

Published: Apr 26, 2023
