

WHAT MAKES LEXIKAT DIFFERENT?
Currently, most researchers doing automated text analysis rely on statistical models of word co-occurrence. The principal method is Latent Dirichlet Allocation, or LDA. Lexikat is different. Instead of inferring topics from word statistics alone, it compares the themes in your document with concept maps crowdsourced via web search results. Our system uses the internet as a giant categorisation machine, allowing it to produce much more human-like results than purely statistical alternatives.
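To make the contrast concrete, here is a minimal sketch of the conventional LDA approach using Python's scikit-learn library. The placeholder texts, the choice of two topics, and the parameters below are illustrative assumptions, not the exact pipeline our researcher built:

# Minimal LDA topic model sketch (illustrative assumptions only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder snippets; a real run would use the full Wikipedia article texts.
documents = [
    "Bruce Willis is an American actor known for action films and television",
    "Wyatt Earp was an Old West lawman and gambler remembered for Tombstone",
]

# Turn the documents into a bag-of-words matrix of term counts.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(documents)

# Fit an LDA model with two topics (a hand-picked, assumed number).
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Print the ten highest-weighted words for each topic.
terms = vectorizer.get_feature_names_out()
for topic_index, weights in enumerate(lda.components_):
    top_words = [terms[i] for i in weights.argsort()[::-1][:10]]
    print(f"Topic {topic_index}: {', '.join(top_words)}")

Even this toy version requires choosing the number of topics by hand and interpreting the resulting word lists yourself, which is the kind of manual coding and tuning work the comparison below refers to.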
You can see the difference in the example projects below, built from two Wikipedia articles: Bruce Willis and Wyatt Earp. The LDA model took our researcher around a day and a half to code and test; the Lexikat word clouds took just three seconds to generate on the website. What's more, while the LDA model is static, the Lexikat results can be edited by the user. The system remembers your changes, so you can customise your analysis however you like.

