Notes on codes, projects and everything
Recently I switched my search code to Annoy because the input dataset is huge (7.5 million records with a 20k-term dictionary). It wasn’t without issues, but I will probably talk about those next time. In order to figure out what each parameter meant, I spent some time watching the talk given by the author, @fulhack.
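For reference, here is a minimal sketch of how those parameters fit together in Annoy’s Python interface; the dimensionality, the random toy vectors and the n_trees/search_k values below are placeholders, not the settings I ended up with.

```python
# Minimal Annoy sketch: build an index, then query it.
# The two knobs from the talk are the number of trees (build time)
# and search_k (query time); the values here are made-up examples.
from annoy import AnnoyIndex
import random

dim = 100                                  # dimensionality of the vectors (assumed)
index = AnnoyIndex(dim, 'angular')         # 'angular' ~ cosine distance

for i in range(1000):                      # toy vectors standing in for the real records
    index.add_item(i, [random.gauss(0, 1) for _ in range(dim)])

index.build(50)                            # 50 trees: more trees, bigger index, better recall
index.save('search.ann')

query = [random.gauss(0, 1) for _ in range(dim)]
# third argument is search_k: more nodes inspected per query, slower but more accurate
neighbours = index.get_nns_by_vector(query, 10, 5000)
print(neighbours)
```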
There are a lot of things I want to post both here and to my personal blogs. However, I was sucked into Sanctuary for most of last month. I guess after a month of playing, it is probably time to slowly resume my personal projects.
I wanted to try using Virtuoso as the storage engine for Redland, but unfortunately there is no librdf-storage-virtuoso package for Ubuntu. After getting some help from @dajobe, I attempted to build the packages myself. Although it takes quite some time to build the packages, it doesn’t seem too difficult.
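For the record, this is roughly what I hope to end up with once the package builds, going by the Redland Python bindings; the storage name follows the Virtuoso storage module, and the DSN, user and password are placeholders for a local instance.

```python
# A rough sketch, assuming the Redland Python bindings (import RDF) and the
# freshly built Virtuoso storage module are both installed. The DSN and
# credentials below are placeholders.
import RDF

storage = RDF.Storage(storage_name="virtuoso",
                      name="db",
                      options_string="dsn='Local Virtuoso',user='dba',password='dba'")
model = RDF.Model(storage)

# quick sanity check: store a single triple and flush it to Virtuoso
model.add_statement(RDF.Statement(RDF.Uri("http://example.org/s"),
                                  RDF.Uri("http://example.org/p"),
                                  RDF.Node("an object literal")))
model.sync()
print("stored one triple in Virtuoso")
```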
After a miserable trip back to the academic world, I finally regained the courage to get back to the job market. During my time in university, I spent quite some time reading about the Semantic Web and RDF. Back then I thought I would publish more in this format in the future. However, that didn’t really happen, mostly because I am too lazy.
I just happened to see this post a few months ago, and the author created another cloud that uses almost the same technique to ‘visualize’ a list of countries. The author originally used PHP to generate the cloud, and I thought I might be able to do the same in JavaScript. After some quick coding I managed to produce something similar to the first example; source code after the jump.
Implementing an Information Retrieval system is a fun thing to do. However, doing it efficiently is not (at least to me). So my first few attempts didn’t really end well (mostly using just Go/golang with some bash tricks here and there, with or without a database). Then I jumped back to Python, which I am more familiar with, and was very surprised by all the options available. So I started with the Pandas and scikit-learn combo.
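Something along these lines is roughly where that combo starts for me: a TF-IDF matrix from scikit-learn plus cosine similarity for ranking. The documents and query below are made-up placeholders, not the actual collection.

```python
# A small TF-IDF retrieval sketch: vectorize the documents, vectorize the
# query, rank documents by cosine similarity. Toy data only.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = pd.Series([
    "information retrieval with python",
    "building a search engine in go",
    "pandas and scikit-learn for text processing",
])

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)          # documents -> TF-IDF matrix

query_vector = vectorizer.transform(["python information retrieval"])
scores = cosine_similarity(query_vector, doc_vectors).ravel()

# rank documents by similarity to the query, highest first
ranking = pd.DataFrame({"document": docs, "score": scores}).sort_values("score", ascending=False)
print(ranking)
```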