Notes on codes, projects and everything
Back then, when I was still working on my postgraduate degree research, I used RDF, which was the preferred format for representing data in the Semantic Web world. I eventually dropped the degree and stopped following the development of the related technologies and standards, until I volunteered to update the import script for popit while I was looking for my next job/project.
After a miserable trip back to the academic world, I finally regained the courage to return to the job market. During my time at university I spent quite a while reading about the Semantic Web and RDF, and I thought I would publish more data in that format in the future. However, that didn’t really happen, mostly because I am too lazy.
As the name implies, the Resource Description Framework, or RDF in short, is a language for representing information about resources in the World Wide Web. The information represented is mostly metadata such as the title (assuming the resource is a web page), the author, the last modified date and so on. Besides describing resources that are network-accessible, it can also be used to represent things that cannot be retrieved over the network, as long as they can be identified by a URI.
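To make that concrete, here is a minimal sketch (not taken from the post) of describing a web page with a few metadata triples using the Python rdflib library; the URI and the property values are placeholders.

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC

# The resource being described; any URI works, it does not have to be fetchable.
# The URL and property values below are made up for illustration.
page = URIRef("http://example.com/notes/some-post")

g = Graph()
g.add((page, DC.title, Literal("Notes on codes, projects and everything")))
g.add((page, DC.creator, Literal("Some Author")))
g.add((page, DC.date, Literal("2012-01-01")))

# Serialize the triples in Turtle, one of the common RDF syntaxes.
print(g.serialize(format="turtle"))
```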
Recently I noticed that some of my pet projects share a common pattern: they are all based on some kind of grid, so I find myself writing similar pieces of code over and over again. While re-inventing the wheel is quite fun, especially when you learn a new way of getting things done with every iteration, it becomes quite tedious after a while.
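None of the actual projects are reproduced here, but the sort of helper I keep rewriting looks roughly like the hypothetical Python sketch below: a tiny class that stores cells in a flat list and exposes coordinate-based access and neighbour lookup.

```python
class Grid:
    """A minimal 2D grid backed by a flat list (illustrative sketch only)."""

    def __init__(self, width, height, fill=None):
        self.width = width
        self.height = height
        self.cells = [fill] * (width * height)

    def _index(self, x, y):
        if not (0 <= x < self.width and 0 <= y < self.height):
            raise IndexError("cell out of range")
        return y * self.width + x

    def get(self, x, y):
        return self.cells[self._index(x, y)]

    def set(self, x, y, value):
        self.cells[self._index(x, y)] = value

    def neighbours(self, x, y):
        """Yield the values of the orthogonally adjacent cells that exist."""
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < self.width and 0 <= ny < self.height:
                yield self.get(nx, ny)


grid = Grid(3, 3, fill=0)
grid.set(1, 1, 9)
print(list(grid.neighbours(0, 1)))  # values of the cells around (0, 1)
```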
Recently I switched my search code to Annoy because the input dataset is huge (7.5 million records with a 20k-entry dictionary). It wasn’t without issues, though I will probably talk about those next time. To figure out what each parameter means, I spent some time watching the talk given by the author, @fulhack.
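For context, the parameters in question are mainly n_trees (passed when building the index) and search_k (passed at query time). Below is a small, self-contained Python sketch of how they appear in the Annoy API; the dimensionality and the random vectors are placeholders, not the actual dataset.

```python
import random

from annoy import AnnoyIndex

f = 40  # vector dimensionality; a placeholder, not the real feature size
index = AnnoyIndex(f, "angular")  # "angular" is a cosine-like distance

# In the real setup the vectors come from the 7.5 million record dataset;
# random vectors stand in for them here.
for i in range(1000):
    index.add_item(i, [random.gauss(0, 1) for _ in range(f)])

index.build(10)  # n_trees=10: more trees mean a bigger index but better accuracy
index.save("search.ann")

# search_k controls how many nodes are inspected per query; a larger value
# trades query speed for accuracy (the default is roughly n_trees * n).
neighbours = index.get_nns_by_vector([0.0] * f, 10, search_k=1000)
print(neighbours)
```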
I wanted to try using Virtuoso as the storage engine for Redland, but unfortunately there is no librdf-storage-virtuoso package for Ubuntu. After getting some help from @dajobe, I attempted to build the packages myself. Although the build took quite some time, it turned out not to be too difficult.
This is the second part of the golang learning rant log. Previously on (note (code cslai)), I managed to turn each line of the CSV into a hash map, so today I am going to turn those into JSON Lines.
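The post itself works in Go; purely to illustrate the step described here (one CSV row becomes one hash map, and each map becomes one line of JSON), a rough Python equivalent with made-up file names would be:

```python
import csv
import json

# "input.csv" and "output.jsonl" are made-up names for illustration.
with open("input.csv", newline="") as csv_file, open("output.jsonl", "w") as out:
    for row in csv.DictReader(csv_file):  # each row is already a dict (hash map)
        out.write(json.dumps(row) + "\n")  # one JSON document per line = JSON Lines
```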
This post continues from this post. I have finally found some time to start developing my pet project using Zend Framework. After getting the controller to work in a way I am more familiar with (compared to Kohana, which I used at work), the next step is to get it to output some data.