Designing Agentive Technology: Part 1

Over the past months we’ve been delighted to have Natalie Jensen join us as an intern in the San Francisco Labs. Among other topics, Natalie has been looking into how to design ‘Agentive Technology’ – a term coined by Chris Noessel and discussed in depth in his 2017 book, Designing Agentive Technology: AI That Works for People. Read her thoughts in the post below.

Agentive Technology

On December 10th, I attended an AI meetup where Chris Noessel spoke about what agentive technology is, how it’s already being used, and what it means from a design standpoint.

When I arrived, the word agentive seemed entirely foreign to me, but its root, ‘agent’, explains the technology fairly well. Agentive technology takes a user’s interests into account and acts as an agent to cater to those interests. This helps the user achieve a goal with minimal effort, allowing their attention to turn elsewhere while the agent actively does the work. One example is a followed search, where a few initial searches and clicks turn into tailored results being pushed to you – think Amazon’s “Recommended to You” or Spotify’s “Daily Mixes”. Eventually the user doesn’t have to search for what they want; instead, what they want is presented to them. Another example is Waze, which constantly re-evaluates current traffic to determine whether an alternative route would be quicker. This eliminates potential error (“Is this shortcut actually shorter?”) and saves the user time.
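The followed-search idea can be sketched in a few lines. This is a toy illustration, not how Amazon or Spotify actually work: the `FollowedSearch` class, its tag-counting profile, and the sample catalog are all invented for this example.

```python
from collections import Counter

class FollowedSearch:
    """Toy agent: learns tags from a user's clicks and ranks new items."""

    def __init__(self):
        self.interest = Counter()  # tag -> how often the user engaged with it

    def record_click(self, item_tags):
        # Each click nudges the profile toward that item's tags.
        self.interest.update(item_tags)

    def recommend(self, catalog, n=3):
        # Score every item by how well its tags match the learned profile.
        scored = sorted(
            catalog.items(),
            key=lambda kv: sum(self.interest[tag] for tag in kv[1]),
            reverse=True,
        )
        return [name for name, _ in scored[:n]]

catalog = {
    "jazz playlist": {"jazz", "instrumental"},
    "pop playlist": {"pop", "vocals"},
    "lo-fi playlist": {"instrumental", "chill"},
}

agent = FollowedSearch()
agent.record_click({"jazz", "instrumental"})
agent.record_click({"instrumental", "chill"})
print(agent.recommend(catalog, n=2))
```

After just two clicks the agent starts pushing instrumental playlists ahead of pop – the user never has to search for them again.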

When designing agentive technology, designers need to focus on the goal rather than the task. An example is the vacuum, which for years was continuously redesigned to be lighter, more powerful, and more efficient – then came Roomba. Roomba can vacuum (the task) without human interaction and leaves floors clean (the goal). Although largely devoid of human interaction, agentive technology still needs to let the user start, monitor, tune, and stop the agent so that it performs as the user desires. Thus the goal of agentive technology is for the user to be able to disengage after setting the agent up to their liking, stepping back in only when it needs tuning again.
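The four controls a user keeps – start, monitor, tune, stop – can be sketched as a minimal interface. The `Agent` class, its preference dictionary, and the vacuum example are hypothetical, chosen only to show how a user could disengage and still step back in:

```python
class Agent:
    """Minimal sketch of the start/monitor/tune/stop controls a user keeps."""

    def __init__(self, preferences):
        self.preferences = dict(preferences)
        self.running = False
        self.log = []  # activity the user can review while disengaged

    def start(self):
        self.running = True
        self.log.append("started")

    def monitor(self):
        # The user can check in without taking the work back over.
        return list(self.log)

    def tune(self, **changes):
        # Adjust preferences mid-flight; the agent keeps running.
        self.preferences.update(changes)
        self.log.append("tuned: " + str(changes))

    def stop(self):
        self.running = False
        self.log.append("stopped")

vacuum = Agent({"schedule": "daily", "power": "eco"})
vacuum.start()
vacuum.tune(power="max")   # user notices a messy room, bumps power
vacuum.stop()
print(vacuum.monitor())
```

The point of the design is that everything between `start()` and `stop()` happens without the user – they only tune when the outcome drifts from the goal.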

When designing these agents, companies also need to be aware of potential consumer concerns. The first is control and subjectivity. Since agents are software that exists across various platforms (Apple, Spotify, Waze, etc.), they can push biased results that favor the company’s and its partners’ interests. By not promising objectivity, companies can avoid some backlash, but agent algorithms still need to be monitored for abuse.

Another concern is security, since the agent monitors your data and builds a detailed model of you. Because even the simplest information stored by the agent could be used to support identity theft, the potential security risks are serious.

Companies must take all of these concerns into account from an ethical standpoint and actively work to protect their consumers. This approach is here to stay, and its potential negative by-products will be too if not addressed.

Intrigued by agentive technology? Natalie will be following up with a subsequent article on this topic; watch this space.

The Labs Team

Sutherland Labs