How much will you trust ‘personalized’ search?

by Martin White | Feb 20, 2018 | Digital workplace, Search

The promise of personalized search results is alluring, especially when it is made more enticing by the offer to deliver not a document but ‘information’. An Attivio white paper, The Cognitive Step – How Search Will Improve, states:

“The core capabilities of cognitive search are focused upon precision, certainty, and ease of use. These are the elements that will restore confidence in and satisfaction with search.”

The approach being widely promoted is that a combination of AI, machine learning and advances in natural language processing will take a wide range of weak signals (position, role, previous searches and much more) and use them to respond to a query with a result that is specific to the user at that moment in time. Google has long set the standard in this respect, and enterprise search vendors are now promising to deliver the same benefits in an organizational environment. Google, however, is helped significantly by the fact that people want their information to be found and are smart enough to know what they can do to improve their ranking. No one in the enterprise has an incentive to make their information more findable, though arguably they should have.

The concept of personalized search profiles dates back to the work of H. P. Luhn at IBM in 1958, and by 2000 the development of novel algorithms for this purpose was well underway. As you might imagine, Microsoft has been active in this area. Take, for example, a very interesting Microsoft research paper on how short-term and long-term query personalization data complement each other. However, as with most of the published research, it is based on the outcome of tests on public web sites. Comparatively little research has been carried out in an enterprise search context, which, as a recent research paper shows, is somewhat different to web site search. A number of new approaches are under development, notably AV-AT, which seeks to incorporate human emotion into intelligent computer systems.

Which brings me to the matter of trust. Your new Kognitive Insight Corporation Search (aka KICS) application may deliver what seems the ultimately precise answer, but (to go back to the Attivio quote) what is the certainty that it is the optimum result? How confident can you be that the weighting of all the weak signals is appropriate to your requirements? Is location weighted more heavily than role, or has too much emphasis been placed on three weeks of queries you posted when a colleague was ill and you had to take on a different role? How long will it be before the search application realizes this was an exceptional situation? A day, a week, somewhen? To what extent can you modify the weak signals? How many results do you want to see in order to place the ‘best’ result in context? These are just some of the questions you need answers to.
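To make the weighting question concrete, here is a minimal sketch of how a personalized score might be computed as a weighted sum of weak signals, with search-history evidence decaying over time. The signal names, weights and half-life are all invented for illustration; no vendor exposes its model this simply.

```python
from dataclasses import dataclass

# Hypothetical weak signals for one (user, document) pair.
# All names and value ranges here are assumptions for this sketch.
@dataclass
class Signals:
    text_relevance: float    # classic query/document match, 0..1
    role_match: float        # fit with the user's current role, 0..1
    location_match: float    # fit with the user's location, 0..1
    history_match: float     # similarity to the user's recent queries, 0..1
    days_since_query: float  # age of that query-history evidence, in days

def personalized_score(s: Signals,
                       w_text: float = 0.5, w_role: float = 0.2,
                       w_loc: float = 0.1, w_hist: float = 0.2,
                       half_life_days: float = 14.0) -> float:
    # History evidence loses half its influence every `half_life_days`,
    # so a three-week anomaly (covering for a sick colleague) fades --
    # but only as quickly as the chosen half-life allows.
    decay = 0.5 ** (s.days_since_query / half_life_days)
    return (w_text * s.text_relevance
            + w_role * s.role_match
            + w_loc * s.location_match
            + w_hist * s.history_match * decay)
```

Every question in the paragraph above maps onto a parameter here: is `w_loc` larger than `w_role`, how short is `half_life_days`, and can the user adjust any of them?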

So often when search does not work, the cause is not the technology. I have recently written about the propensity for employees to work around systems, not because the systems are unusable but because the information in them is poor, with no guidelines on information quality. No matter how good the ranking model, how clever the personalization and how relevant (at first sight) the information, if in the end you have doubts about its quality and trustworthiness then the relevance is irrelevant.

It is easy to make a case for personalized search, and the good news is that technical solutions are building on 60 years of research. Current-generation AI/ML systems can process more parameters, more quickly, than has ever been possible. The big question, which can only be answered on a case-by-case basis, is whether your employees will trust the mathematics enough to bet their reputation (and potentially their career) on the outcome.

(For a consideration of exploratory search, see this post.)

Martin White