Notes on codes, projects and everything
A friend of mine recently posted a screenshot containing a code snippet for a fairly straightforward problem. After reading the solution, I suddenly had the itch to propose another solution that I initially thought would be better (SPOILER: turns out it isn't). So instead of playing Diablo 3 just now, I mysteriously found myself glued to my seat, coding up that alternative solution.
After publishing the previous note on setting up my development environment, I found myself spending more time in the CLI (usually via SSH from the host). Since I didn't really need all the GUI apps in a standard Ubuntu desktop environment, I went ahead and set up a new environment based on the Ubuntu Quantal server edition beta-1. For some reason my network stopped working, and I didn't really want to spend time finding out the cause, so I reinstalled everything again today using the final installer, as well as the updated VirtualBox 4.2.6.
Implementing an Information Retrieval system is a fun thing to do. However, doing it efficiently is not (at least for me), so my first few attempts didn't really end well (they mostly used just Go/golang with some bash tricks here and there, with or without a database). Then I jumped back to Python, which I am more familiar with, and was very surprised by all the options available. So I started with the Pandas and scikit-learn combo.
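For context, this is roughly the kind of retrieval step that combo makes easy. The snippet below is only a minimal sketch under my own assumptions (the toy DataFrame, the "text" column name and the query are made up for illustration), not the actual code from those attempts.

```python
# Minimal TF-IDF retrieval sketch with Pandas + scikit-learn.
# The DataFrame, column name and query are made up for illustration.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = pd.DataFrame({"text": [
    "go is a compiled language",
    "python has a rich data science ecosystem",
    "information retrieval ranks documents by relevance",
]})

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs["text"])   # sparse term-document matrix

query_vector = vectorizer.transform(["information retrieval in python"])
scores = cosine_similarity(query_vector, doc_vectors).ravel()

# Rank documents by similarity to the query, highest first
print(docs.assign(score=scores).sort_values("score", ascending=False))
```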
Sometimes I really doubt the advantage of recycling old stuff to fund new units, beyond goodwill. Sure, you get to convince yourself that you are saving the environment by doing so, and it also saves money in the long run. However, I didn't realize how much it could actually generate until I tried to work out an answer to a fictional IQ question.
This is basically a small incremental update to my script published here. For some reason the previous version of the script didn't really work, so this release should fix the problem. Besides fixing the issue where the daemon did not actually launch at startup, I have also added a settings applet for the script.
Recently I switched my search code to Annoy because the input dataset is huge (7.5mil records with a 20k dictionary count). It wasn't without issues, though; I will probably talk about those next time. In order to figure out what each parameter means, I spent some time watching the talk given by the author @fulhack.
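To make the parameter names concrete, here is a minimal sketch of how an Annoy index is typically built and queried; the dimensionality, the random vectors and the file name are made up for illustration, not my actual setup. The two main knobs are n_trees at build time and search_k at query time.

```python
# Minimal Annoy sketch: build an index, save it, then query neighbours.
# Dimensions, vectors and file name below are made up for illustration.
import random
from annoy import AnnoyIndex

f = 20                              # vector dimensionality
index = AnnoyIndex(f, "angular")    # angular (cosine-like) distance

for i in range(1000):               # pretend these are document vectors
    index.add_item(i, [random.gauss(0, 1) for _ in range(f)])

index.build(10)                     # n_trees: more trees -> better recall, bigger index
index.save("toy.ann")               # the index is memory-mapped from disk on load

# n = 10 neighbours; search_k = -1 uses the default (roughly n_trees * n);
# raising search_k trades query speed for accuracy.
neighbours = index.get_nns_by_item(0, 10, -1, True)
print(neighbours)                   # (ids, distances)
```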