

Tackling AI risks: Your reputation is at stake

Forget Skynet: one of the biggest risks of AI is your organization's reputation. That means it's time to put science-fiction catastrophizing to one side and begin thinking seriously about what AI actually means for us in our day-to-day work.

This isn't to advocate for navel-gazing at the expense of the bigger picture: it's to urge technologists and business leaders to recognize that if we're to address the risks of AI as an industry, maybe even as a society, we need to closely consider its immediate implications and outcomes. If we fail to do that, taking action will be practically impossible.

Risk is all about context

Risk is all about context. In fact, one of the biggest risks is failing to acknowledge or understand your context: that's why you need to begin there when evaluating risk.

This is particularly important in terms of reputation. Think, for instance, about your customers and their expectations. How might they feel about interacting with an AI chatbot? How damaging might it be to provide them with false or misleading information? Maybe minor customer inconvenience is something you can handle, but what if the error has a significant health or financial impact?

Even if implementing AI seems to make sense, there are clearly some downstream reputation risks that need to be considered. We've spent years talking about the importance of user experience and being customer-focused: while AI might help us here, it could also undermine those very things.

There's a similar question to be asked about your teams. AI may have the capacity to drive efficiency and make people's work easier, but used in the wrong way it could seriously disrupt existing ways of working. The industry has been talking a lot about developer experience recently, something I've written about before. The decisions organizations make about AI need to improve the experiences of teams, not undermine them.

In the latest edition of the Thoughtworks Technology Radar, a biannual snapshot of the software industry based on our experiences working with clients around the world, we talk about precisely this point. We call out AI team assistants as one of the most exciting emerging areas in software engineering, but note that the focus has to be on enabling teams, not individuals. "You should be looking for ways to create AI team assistants to help create the '10x team,' as opposed to a bunch of siloed AI-assisted 10x engineers," we say in the latest report.

Failing to heed the working context of your teams could certainly cause significant reputational damage. Some bullish organizations might see this as part and parcel of innovation, but it's not. It shows potential employees, particularly highly technical ones, that you don't really understand or care about the work they do.

"Managing risk requires real attention to the specifics of technology implementation."

Ken Mugrage
Principal Technologist, Thoughtworks

Tackling risk through smarter technology implementation

There are lots of tools that can be used to help manage risk. Thoughtworks helped put together the Responsible Tech Playbook, a collection of tools and techniques that organizations can use to make more responsible decisions about technology (not just AI).

However, it's important to note that managing risk, particularly reputational risk, requires real attention to the specifics of technology implementation. This was particularly clear in work we did with an assortment of Indian civil society organizations, developing a social welfare chatbot that citizens can interact with in their native language. The risks here were not unlike those discussed earlier: the context in which the chatbot was being used (as support for accessing vital services) meant that inaccurate or 'hallucinated' information could stop people from getting the resources they depend on.

This contextual awareness informed technology decisions. We implemented a version of something called retrieval-augmented generation (RAG) to reduce the risk of hallucinations and improve the accuracy of the model the chatbot was running on.
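To make that concrete, here is a minimal sketch of the RAG pattern, not our actual chatbot implementation: each answer is grounded in vetted documents retrieved for that question. The sample documents are hypothetical, and the TF-IDF retrieval is a simplifying assumption; production systems typically use dense embeddings and a vector store.

```python
# A minimal RAG sketch: retrieve relevant vetted text, then ground the
# model's answer in it. TF-IDF stands in for a real embedding pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical knowledge base of vetted welfare-scheme descriptions.
documents = [
    "Scheme A provides food subsidies to households below the poverty line.",
    "Scheme B offers free school meals for children aged 6 to 14.",
    "Scheme C covers hospital costs for registered rural workers.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

def build_prompt(question: str) -> str:
    """Constrain the model to retrieved context to reduce hallucinations."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{context}\n\n"
        f"Question: {question}"
    )

# The grounded prompt would then be sent to an LLM of your choice.
print(build_prompt("Who is eligible for free school meals?"))
```

The key design point is the instruction to answer only from retrieved context: it trades a little fluency for a much lower chance of the chatbot inventing eligibility rules.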

Retrieval-augmented generation features on the latest edition of the Technology Radar: it might be viewed as part of a wave of emerging techniques and tools in this space that are helping developers to tackle some of the risks of AI. These range from something called NeMo Guardrails, an open-source tool that puts limits on chatbots to increase accuracy, to the technique of running large language models (LLMs) locally with tools like Ollama, to ensure privacy and avoid sharing data with third parties. This wave also includes tools that aim to improve transparency in LLMs (which are notoriously opaque), like Langfuse.
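To illustrate the local-first approach, here is a small sketch that queries a locally running model through Ollama's HTTP API. It assumes Ollama is already installed and serving on its default port, with a model such as llama3 pulled; the prompt is purely illustrative.

```python
# A minimal sketch of querying a locally hosted model via Ollama's HTTP API.
# Assumes `ollama serve` is running and `ollama pull llama3` has been done.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # any locally pulled model tag
        "prompt": "Summarize the key risks of deploying a customer chatbot.",
        "stream": False,    # return one JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```

Because inference happens entirely on your own hardware, the prompt and any data it contains never leave your machine.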

Indeed, it's worth pointing out that it's not just a question of what you implement, but also what you avoid doing. That's why, in this Radar, we caution readers about the dangers of overenthusiastic LLM use and rushing to fine-tune LLMs.

Rethinking risk

There is, of course, a new wave of AI risk assessment frameworks. There is also legislation (including new laws in some jurisdictions) that organizations must pay attention to. But addressing AI risk isn't just a question of applying a framework or even following a static set of good practices. In a dynamic and changing environment, it's about being open-minded and adaptive, paying close attention to the ways that technology choices shape human actions and social outcomes on both a micro and macro scale.

One useful framework is Dominique Shelton Leipzig's traffic light framework. A red light signals something prohibited, such as discriminatory surveillance, while a green light signals low risk and a yellow light signals caution. I like the fact it's so lightweight: for practitioners, too much legalese or documentation can make it hard to translate risk into action.
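As a rough illustration of just how lightweight such a scheme can be, here is a sketch that encodes a traffic-light review as a simple lookup. The categories and example use cases are my own assumptions, not Shelton Leipzig's official taxonomy.

```python
# An illustrative traffic-light risk register; categories are assumptions.
from enum import Enum

class Light(Enum):
    RED = "prohibited"            # e.g. discriminatory surveillance
    YELLOW = "proceed with caution"
    GREEN = "low risk"

# Hypothetical classification of proposed AI use cases by context.
use_cases = {
    "emotion-based surveillance of employees": Light.RED,
    "chatbot giving benefits eligibility advice": Light.YELLOW,
    "autocomplete for internal documentation": Light.GREEN,
}

for use_case, light in use_cases.items():
    print(f"{light.name}: {use_case} ({light.value})")
```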

However, I think it's worth flipping it and seeing the risks as embedded in contexts, not in the technologies themselves. That way, you're not trying to make a solution adapt to a given situation; you're responding to a situation and addressing it as it actually exists.

If organizations take that approach to AI, and, indeed, to technology in general, they'll ensure they're meeting the needs of stakeholders and keeping their reputations safe.

Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.
