Notes on codes, projects and everything
Been trying my best to stick to the well-known UNIX philosophy – “Do one thing and do it well” – so I have been breaking my projects down into numerous small tasks and relying on existing tools whenever possible. One of the existing tools that I use a lot is GNU sort. Generally, the sort utility does fine and dandy without any configuration, at least until I ran into the problem that led to this post.
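As a quick illustration (the actual problem is the subject of the post itself), here is a hedged sketch of driving GNU sort from Python; the locale setting shown is just one commonly tweaked knob that changes its ordering, not necessarily the culprit here:

```python
import os
import subprocess

lines = "apple\nBanana\ncherry\n"

# Default behaviour: sort collates according to the current locale,
# which on many systems ignores case when ordering.
default = subprocess.run(
    ["sort"], input=lines, capture_output=True, text=True,
).stdout

# With LC_ALL=C, sort falls back to plain byte-wise comparison,
# so every uppercase letter orders before any lowercase one.
bytewise = subprocess.run(
    ["sort"], input=lines, capture_output=True, text=True,
    env={**os.environ, "LC_ALL": "C"},
).stdout

print(default)   # apple / Banana / cherry (in a typical UTF-8 locale)
print(bytewise)  # Banana / apple / cherry
```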
Folksonomy is a neologism formed from two words, ‘folk’ and ‘taxonomy’, which describes conceptual structures created by users [4, 5]. A folksonomy is the unstructured, collaborative use of tags for content classification and knowledge representation, popularized by Web 2.0 and social applications [1, 5]. Unlike a taxonomy, which is commonly used to organize resources into a category hierarchy, a folksonomy is non-hierarchical and non-exclusive [3]. A content hierarchy and a folksonomy can be used together to improve content classification.
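To make the distinction concrete, a folksonomy is often modelled as a flat collection of (user, resource, tag) assignments rather than a tree; a minimal sketch, with made-up data:

```python
from collections import defaultdict

# A folksonomy modelled as a flat set of (user, resource, tag)
# assignments; the data here is made up purely for illustration.
# Nothing is hierarchical or exclusive: any user may attach any
# number of tags to any resource.
assignments = {
    ("alice", "post-42", "python"),
    ("alice", "post-42", "tutorial"),
    ("bob",   "post-42", "python"),
    ("bob",   "post-17", "unix"),
}

# Structure emerges from aggregation rather than a predefined tree.
tags_by_resource = defaultdict(set)
for user, resource, tag in assignments:
    tags_by_resource[resource].add(tag)

print(dict(tags_by_resource))
# e.g. {'post-42': {'python', 'tutorial'}, 'post-17': {'unix'}}
```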
This update took me quite a bit more time than I initially expected. Anyway, I have done some refactoring work on the original code and thought it would be nice to document the changes. Most of the changes involve renaming functions. I am not sure if the new names will stick, but I am quite satisfied for now.
The Semantic Web always sounds like some magical thing that a group of people keep yelling about. Chances are, if one is into web development, he/she would have heard of it one way or another. However, despite the supposedly wide awareness of it, are we actually using it? Or rather, am I publishing enough data to the Semantic Web? OK, I am not, but why?
I haven’t had much time lately, so I didn’t get to write about this new phone that I recently imported. For some reason, this new phone of mine does not act as a mass storage device the way its predecessors did (to a certain extent). Thankfully I can still ssh into the phone, which makes it possible to mount it as an sshfs volume.
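For reference, a minimal sketch of how such a mount could be scripted; the host address and paths are hypothetical, and sshfs/fusermount must already be installed:

```python
import subprocess
from pathlib import Path

# Hypothetical address of the phone and a local mount point.
remote = "user@192.168.1.20:/home/user"
mountpoint = Path.home() / "mnt" / "phone"
mountpoint.mkdir(parents=True, exist_ok=True)

# Mount the phone's filesystem over ssh; once mounted it behaves
# like any other local directory.
subprocess.run(["sshfs", remote, str(mountpoint)], check=True)

# ... copy files around, back things up, etc. ...

# Unmount when done (fusermount -u is the FUSE unmount on Linux).
subprocess.run(["fusermount", "-u", str(mountpoint)], check=True)
```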
Recently I switched my search code to Annoy because the input dataset is huge (7.5 million records, with a dictionary of 20k words). It wasn’t without issues, but I will probably talk about those next time. In order to figure out what each parameter means, I spent some time watching the talk given by the author, @fulhack.
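For context, this is roughly what the Annoy API looks like; the two tunable parameters (n_trees at build time, search_k at query time) are the knobs the talk explains, though the dimensionality and values below are arbitrary:

```python
import random
from annoy import AnnoyIndex

f = 100  # vector dimensionality; the dataset in the post is 20k-dimensional
index = AnnoyIndex(f, "angular")  # angular distance ~ cosine similarity

# Index some random vectors; the real dataset would be millions of records.
for i in range(1000):
    index.add_item(i, [random.gauss(0, 1) for _ in range(f)])

# n_trees: more trees give a bigger index but better accuracy.
index.build(10)
index.save("vectors.ann")

# search_k: how many nodes to inspect per query; larger values are
# slower but more accurate (-1 means the default, n * n_trees).
neighbours = index.get_nns_by_item(0, 10, search_k=-1)
print(neighbours)
```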