Notes on codes, projects and everything
Implementing an Information Retrieval system is a fun thing to do. However, doing it efficiently is not (at least to me). My first few attempts didn't really end well (mostly using just Go with some bash tricks here and there, with or without a database). Then I jumped back to Python, which I am more familiar with, and was pleasantly surprised by all the options available. So I started with the Pandas and Scikit-learn combo.
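The post itself doesn't walk through the code, but a minimal sketch of what I mean by the Pandas and Scikit-learn combo might look like this (the documents and query are made up for illustration):

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A toy document collection kept in a DataFrame (made-up data).
docs = pd.DataFrame({
    "doc_id": [1, 2, 3],
    "text": [
        "information retrieval with python",
        "building search engines in go",
        "pandas makes tabular data easy",
    ],
})

# Vectorise the documents, then rank them against a query by cosine similarity.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs["text"])
query_vector = vectorizer.transform(["python information retrieval"])

docs["score"] = cosine_similarity(query_vector, doc_vectors).ravel()
print(docs.sort_values("score", ascending=False))
```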
I first heard about Pandas probably a year ago when I was in my previous job. It looked nice, but I didn't really get the chance to use it. Practically, it is a library that makes data look like a mix of a relational database table and an Excel sheet. It is easy to query with, and it provides ways to process data fast if you know how to do it properly (no, I don't, so I cheated).
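For example, with a purely made-up table, filtering and grouping reads almost like SQL:

```python
import pandas as pd

# A small made-up table to illustrate the "database table meets spreadsheet" feel.
df = pd.DataFrame({
    "name": ["alice", "bob", "carol"],
    "language": ["python", "go", "clojure"],
    "years": [5, 2, 1],
})

# SQL-ish filtering and aggregation without leaving Python.
experienced = df[df["years"] >= 2]
print(experienced)
print(df.groupby("language")["years"].mean())
```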
As the name implies, Resource Description Framework, or RDF in short, is a language to represent information about resources on the World Wide Web. The information represented is mostly metadata like title (assuming the resource is a web page), author, last modified date, etc. Besides describing resources that are network-accessible, it can also be used to describe things that cannot be accessed through the network, as long as they can be identified by a URI.
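As an illustration (not taken from the post itself, and assuming Python's rdflib is available), a few metadata triples about a made-up page could be written like this:

```python
from rdflib import Graph, URIRef, Literal
from rdflib.namespace import DCTERMS, XSD

# A made-up resource: the URI only needs to identify it; the page itself
# does not even have to be dereferenceable.
page = URIRef("http://example.org/notes/rdf-intro")

g = Graph()
g.add((page, DCTERMS.title, Literal("Notes on RDF")))
g.add((page, DCTERMS.creator, Literal("An Example Author")))  # placeholder name
g.add((page, DCTERMS.modified, Literal("2015-01-01", datatype=XSD.date)))

# Serialise the graph as Turtle to see the triples in a readable form.
print(g.serialize(format="turtle"))
```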
Recently I noticed some of my pet projects share a common pattern: they are all based on some kind of grid. So I find myself writing similar pieces of code over and over again. While re-inventing wheels is quite fun, especially when you learn a new way of getting things done with every iteration, it actually gets quite tedious after a while.
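Something like this hypothetical little class (names and API made up, not lifted from any of the actual projects) is roughly the kind of thing I keep rewriting:

```python
class Grid:
    """A minimal 2D grid with bounds-checked neighbour lookup."""

    def __init__(self, width, height, fill=None):
        self.width = width
        self.height = height
        self.cells = [[fill for _ in range(width)] for _ in range(height)]

    def get(self, x, y):
        return self.cells[y][x]

    def set(self, x, y, value):
        self.cells[y][x] = value

    def neighbours(self, x, y):
        # 4-connected neighbours, skipping anything outside the grid.
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < self.width and 0 <= ny < self.height:
                yield nx, ny
```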
JavaScript is getting so foreign to me these days, though mostly in a good way. I recently got to learn React through work, and the JSX extension makes web development bearable again. On the other hand, I picked up a little bit of Vue.js but really hated all the magic involved (no, I don't enjoy putting code into quotes).
I was asked to evaluate fuzzy c-means to find out whether it is a good clustering algorithm for my MPhil project. So I spent a whole afternoon reading through some tutorials to get a basic understanding. Then I thought, why not implement it in Clojure, because it doesn't look too complicated (I was so wrong…).
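The Clojure attempt isn't shown here, but a rough NumPy sketch of the standard fuzzy c-means loop, alternating centroid and membership updates, looks something like this (the data points are made up):

```python
import numpy as np

def fuzzy_c_means(points, n_clusters, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means: alternate between updating centroids and
    fuzzy memberships for a fixed number of iterations."""
    rng = np.random.default_rng(seed)
    # Random initial membership matrix, rows normalised to sum to 1.
    u = rng.random((len(points), n_clusters))
    u /= u.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # Centroids are membership-weighted means of the points.
        w = u ** m
        centroids = (w.T @ points) / w.sum(axis=0)[:, None]

        # Distances from every point to every centroid.
        dist = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        dist = np.fmax(dist, 1e-10)  # avoid division by zero

        # Standard membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)).
        power = 2.0 / (m - 1.0)
        u = 1.0 / ((dist[:, :, None] / dist[:, None, :]) ** power).sum(axis=2)

    return centroids, u

points = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
centroids, memberships = fuzzy_c_means(points, n_clusters=2)
print(centroids)
```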
The Semantic Web is not just about putting data on the web, but also about making links so that a person as well as a machine can explore the web of data. Links in the web of data connect arbitrary things described by RDF, as opposed to links in the web of hypertext, which connect only web resources. Linking arbitrary things then allows related things to be found while performing a search.
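As a small made-up illustration (again with rdflib, purely as an assumption), a person, which is not a web page, can be linked to a thing in an external dataset, because both are identified by URIs:

```python
from rdflib import Graph, URIRef, Literal, Namespace
from rdflib.namespace import RDFS

FOAF = Namespace("http://xmlns.com/foaf/0.1/")

# Made-up person URI; the city URI points at an external dataset (DBpedia),
# showing how links in the web of data can cross dataset boundaries.
person = URIRef("http://example.org/people/alice")
city = URIRef("http://dbpedia.org/resource/Kuala_Lumpur")

g = Graph()
g.add((person, FOAF.name, Literal("Alice")))
g.add((person, FOAF.based_near, city))
g.add((person, RDFS.seeAlso, URIRef("http://example.org/people/alice.rdf")))

print(g.serialize(format="turtle"))
```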