This post describes a simple principle for splitting documents into coherent segments using word embeddings, and presents two implementations of it. The first is a greedy algorithm with linear complexity, whose runtime is on the order of typical preprocessing steps (such as sentence splitting or count vectorising). The second computes the optimal solution to the objective given by the principle, but has quadratic complexity in the document length.
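To give a flavour of the greedy approach, here is a minimal sketch, not the post's actual algorithm: it assumes sentence embeddings have already been computed, and the cosine-to-running-mean criterion and the threshold are illustrative choices.

```python
import numpy as np

def greedy_segment(sentence_vecs, threshold=0.3):
    """Greedily split a sequence of sentence embeddings into segments.

    Starts a new segment whenever the next sentence's cosine similarity
    to the running mean of the current segment drops below `threshold`.
    A single linear pass, so O(n) in the number of sentences.
    """
    if len(sentence_vecs) == 0:
        return []
    segments, current = [], [0]
    mean = np.array(sentence_vecs[0], dtype=float)
    for i in range(1, len(sentence_vecs)):
        v = sentence_vecs[i]
        sim = np.dot(mean, v) / (np.linalg.norm(mean) * np.linalg.norm(v))
        if sim < threshold:
            segments.append(current)          # close the current segment
            current = [i]
            mean = np.array(v, dtype=float)   # restart the running mean
        else:
            current.append(i)
            mean = (mean * (len(current) - 1) + v) / len(current)
    segments.append(current)
    return segments  # lists of sentence indices, one list per segment
```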
Hierarchical softmax is a more efficient way to train word embeddings than a regular softmax output layer. It has been shown that, for language modelling, the choice of tree significantly affects the outcome. In this blog post we describe an experiment to construct semantic trees, and show how they can improve the quality of the learned embeddings on common word analogy and similarity tasks.
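For context, hierarchical softmax is a single switch away in most word2vec implementations. A minimal sketch with gensim 4.x (gensim is an assumption here, not necessarily what the post's experiment used): note that gensim builds a frequency-based Huffman tree by default, which is exactly the baseline the semantic trees in the post are meant to replace.

```python
from gensim.models import Word2Vec

# Tiny toy corpus for illustration; in practice you would stream real text.
sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["dogs", "and", "cats", "are", "friendly", "animals"],
]

# hs=1 enables hierarchical softmax; negative=0 disables negative
# sampling so hierarchical softmax is the sole output layer.
model = Word2Vec(sentences, vector_size=50, hs=1, negative=0, min_count=1)

print(model.wv.most_similar("cat", topn=3))
```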
How can you learn a mapping from a German-language word vectorisation model to an English-language one, to enable cross-lingual document comparison?
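One standard answer, and only a sketch of one possible approach rather than necessarily the post's, is the linear-map technique of Mikolov et al. (2013): learn a matrix W from a seed dictionary of translation pairs by least squares. The arrays below are random placeholders standing in for vectors looked up from two trained monolingual models.

```python
import numpy as np

# X: German vectors, Y: English vectors for a bilingual seed dictionary,
# one aligned word pair per row (shapes: n_pairs x dim). Placeholders here;
# in practice you would look these up from the two embedding models.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 100))
Y = rng.normal(size=(500, 100))

# Learn W minimising ||XW - Y||^2.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def translate(de_vec):
    """Map a German-space vector into the English space."""
    return de_vec @ W
```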
In this blog post we demonstrate how to generate a dataset for recommending Reddit posts based on semantic similarity. The Reddit API and the PRAW Python library are used to extract data from the AskScience subreddit. The posts are then analysed using NLP and built into a Chrome extension for finding similar content.
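A minimal PRAW sketch of the extraction step (credentials, listing choice, and limits are placeholders; the post's actual parameters may differ):

```python
import praw

# Credentials are placeholders; register an app at reddit.com/prefs/apps.
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="askscience-scraper by u/yourname",
)

# Pull the top posts from AskScience with their title and body text.
posts = []
for submission in reddit.subreddit("askscience").top(limit=1000):
    posts.append({
        "id": submission.id,
        "title": submission.title,
        "body": submission.selftext,
    })
```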
We have many different ways of delivering the Lateral API to clients who would like to install it in their own environment. One of those is as an Azure VHD for deployment to Azure VMs. In this post I will cover how to create a VHD from an Ubuntu Cloud Image base that is fully compatible with Azure.
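The core conversion can be condensed to a few qemu-img calls; here is a rough sketch driven from Python, with file names as placeholders (the full post covers the surrounding setup). Azure expects a fixed-size VHD whose virtual size is a whole number of megabytes:

```python
import json
import subprocess

SRC = "ubuntu-cloudimg-amd64.img"  # placeholder: downloaded qcow2 cloud image
RAW, VHD = "disk.raw", "disk.vhd"

# 1. Convert the qcow2 cloud image to a raw disk.
subprocess.run(["qemu-img", "convert", "-f", "qcow2", "-O", "raw", SRC, RAW],
               check=True)

# 2. Azure wants the virtual size rounded to a whole MiB.
info = json.loads(subprocess.check_output(
    ["qemu-img", "info", "-f", "raw", "--output=json", RAW]))
mib = 1024 * 1024
rounded = -(-info["virtual-size"] // mib) * mib  # round up to the next MiB
subprocess.run(["qemu-img", "resize", "-f", "raw", RAW, str(rounded)],
               check=True)

# 3. Produce a fixed-size VHD (dynamic VHDs are not accepted by Azure);
#    force_size preserves the rounded size exactly.
subprocess.run(["qemu-img", "convert", "-f", "raw", "-O", "vpc",
                "-o", "subformat=fixed,force_size", RAW, VHD], check=True)
```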
Facebook Research’s new fastText library can learn the meaning of metadata from the text it labels. By labelling documents with the users who read them, we used fastText to hack together a “hybrid recommender” system, able to recommend documents to users based both on collaborative information (“people who read this also liked that”) and on whether a document’s text is thematically similar to things they read previously. Early signs are that it performs quite well, so we’ll continue to experiment with it.
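A sketch of the trick using fastText's official Python bindings (the file name and hyperparameters are illustrative, and the original experiment may well have used the CLI): each document becomes one training line labelled with every user who read it, and predicting labels for a new document then ranks the users most likely to want it.

```python
import fasttext

# Training file: one document per line, prefixed with one label per user
# who read it, e.g.
#   __label__user42 __label__user7 text of the document ...
# "reads.txt" is a placeholder name for such a file.
model = fasttext.train_supervised(input="reads.txt", epoch=25, dim=100)

# Rank the five users most likely to read a new document.
labels, probs = model.predict("text of a new document", k=5)
print(list(zip(labels, probs)))
```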
Wikipedia is one of the most widely used websites globally. We built a simple extension that displays similar pages at the top of every Wikipedia page!
A technique we use to visualise how Lateral recommendations would look and work on a website is to create a Chrome extension that inserts the recommendations at load time. In this blog post, I will create a Chrome extension that modifies this blog, setting a custom background and altering the HTML.
Give me five is an open source Chrome extension that recommends content you have pushed to Lateral, based on the content of the page you’re currently visiting. It’s built on the same code base as the NewsBot Chrome extension.
What kind of language do British parliamentarians use? We scraped, parsed and vectorised a sample of recent debates from the House of Commons. We then applied a k-means clustering algorithm to these vectors, and created a word cloud for each cluster.
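A condensed sketch of the clustering step with scikit-learn (an illustrative choice of tooling; the toy texts below stand in for the scraped speeches):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy stand-ins for the scraped and parsed Commons speeches.
debates = [
    "the budget deficit and fiscal responsibility",
    "taxation spending and the economy",
    "national health service waiting times",
    "hospital funding and patient care",
]

# Vectorise the speeches, then cluster the resulting vectors.
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(debates)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# The highest-weighted terms near each centroid are roughly what
# would feed each cluster's word cloud.
terms = vec.get_feature_names_out()
for i, centroid in enumerate(km.cluster_centers_):
    top = centroid.argsort()[::-1][:5]
    print(f"cluster {i}:", [terms[j] for j in top])
```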