Tackling AI risks: Your reputation is at stake


Risk is all about context

Risk is all about context. In fact, one of the biggest risks is failing to acknowledge or understand your context: that's why you need to begin there when evaluating risk.

This is particularly important when it comes to reputation. Think, for example, about your customers and their expectations. How might they feel about interacting with an AI chatbot? How damaging might it be to provide them with false or misleading information? Maybe minor customer inconvenience is something you can deal with, but what if it has a significant health or financial impact?

Even if implementing AI seems to make sense, there are clearly some downstream reputation risks that need to be considered. We've spent years talking about the importance of user experience and being customer-focused: while AI might help us here, it could also undermine those things.

There's a similar question to be asked about your teams. AI may have the capacity to drive efficiency and make people's work easier, but used in the wrong way it could seriously disrupt existing ways of working. The industry is talking a lot about developer experience lately (it's something I wrote about for this publication), and the decisions organizations make about AI need to improve the experiences of teams, not undermine them.

In the latest edition of the Thoughtworks Technology Radar, a biannual snapshot of the software industry based on our experiences working with clients around the world, we talk about precisely this point. We call out AI team assistants as one of the most exciting emerging areas in software engineering, but we also note that the focus has to be on enabling teams, not individuals. "You should be looking for ways to create AI team assistants to help create the '10x team,' as opposed to a bunch of siloed AI-assisted 10x engineers," we say in the latest report.

Failing to heed the working context of your teams could cause significant reputational damage. Some bullish organizations might see this as part and parcel of innovation; it's not. It's showing potential employees, particularly highly technical ones, that you don't really understand or care about the work they do.

Tackling risk through smarter technology implementation

There are lots of tools that can be used to help manage risk. Thoughtworks helped put together the Responsible Technology Playbook, a collection of tools and techniques that organizations can use to make more responsible decisions about technology (not just AI).

However, it's important to note that managing risks, particularly those around reputation, requires real attention to the specifics of technology implementation. This was especially clear in work we did with an assortment of Indian civil society organizations, developing a social welfare chatbot that citizens can interact with in their native languages. The risks here weren't unlike those discussed earlier: the context in which the chatbot was being used (as support for accessing vital services) meant that incorrect or "hallucinated" information could stop people from getting the resources they depend on.


