Notes on codes, projects and everything
As the name implies, the Resource Description Framework, or RDF in short, is a language for representing information about resources on the World Wide Web. The information represented is mostly metadata, such as the title (assuming the resource is a web page), the author, the last modified date, and so on. Besides describing resources that are network-accessible, it can also be used to represent things that cannot be accessed through the network, as long as they can be identified by a URI.
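To make this concrete, here is a minimal sketch using Python's rdflib (my choice of library for illustration, not something the post depends on); the URI and the metadata values are made up:

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC, XSD

# The subject is just a URI; it can identify a web page or
# anything else, as long as the URI is unambiguous.
page = URIRef("http://example.com/notes/rdf-intro")

g = Graph()
# Each RDF statement is a (subject, predicate, object) triple;
# here the predicates come from the Dublin Core vocabulary.
g.add((page, DC.title, Literal("Notes on RDF")))
g.add((page, DC.creator, Literal("Some Author")))  # placeholder value
g.add((page, DC.date, Literal("2008-01-01", datatype=XSD.date)))

# Serialize the triples, e.g. in Turtle syntax.
print(g.serialize(format="turtle"))
```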
Another half a day spent on figuring out how to package my daemon properly; fortunately, with help from friends over at the #harmattan IRC channel as well as cckwes, I finally got the deb package generated properly. So, a quick reminder of what my daemon does: it is just a quick hack that toggles the ‘allow background connections’ setting on and off depending on which kind of data network the user is connected to. Apparently I am not the only one looking for this, as a feature request was filed a long, long time ago.
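The core of the hack is little more than a loop like the sketch below. Note that `current_bearer()` and `set_allow_background()` are hypothetical stand-ins; the real daemon goes through Harmattan's connectivity and settings interfaces, which I am not reproducing here:

```python
import time

def current_bearer():
    """Hypothetical: ask the platform for the active bearer,
    e.g. 'wlan' or 'gprs'."""
    raise NotImplementedError

def set_allow_background(enabled):
    """Hypothetical: flip the 'allow background connections'
    setting through the platform's settings interface."""
    raise NotImplementedError

def main():
    last = None
    while True:
        bearer = current_bearer()
        if bearer != last:
            # Allow background connections on WLAN,
            # disable them on cellular data.
            set_allow_background(bearer == "wlan")
            last = bearer
        time.sleep(5)  # polling keeps the sketch simple; a real
                       # daemon would subscribe to change signals
```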
Recently the term “Semantic Web” has become so popular that the Sitepoint blogs keep posting articles on the topic (1, 2). In my college days I learned about semantic networks, and I wonder if there is some relationship between the two. I’m not sure whether I get the concept correctly, but in this article I would like to revisit semantic networks a bit before going on to the Semantic Web. Please correct me if I’m wrong.
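As a quick refresher (my own toy example, not taken from the Sitepoint articles), a semantic network is essentially a directed graph whose edges are labelled relations, which is also the shape of RDF's subject-predicate-object triples:

```python
# A tiny semantic network as a set of labelled edges.
# The nodes and relations are made up for illustration.
network = {
    ("canary", "is-a", "bird"),
    ("bird", "is-a", "animal"),
    ("bird", "has", "wings"),
    ("canary", "can", "sing"),
}

def related(node, relation):
    """All nodes reachable from `node` via edges with this label."""
    return {o for (s, r, o) in network if s == node and r == relation}

print(related("canary", "is-a"))  # {'bird'}
```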
Sometimes I really doubt the advantage of recycling old stuff to fund new units, beyond the goodwill. Sure, you get to convince yourself that you are saving the environment by doing so, and it also saves money in the long run. However, I didn’t realize how much it could add up to until I tried to work out an answer to a fictional IQ question.
This post is purely based on my own speculation, as there’s no experiment on real-life data to actually back the arguments. I am currently trying to document a plan for my experiment(s) on recommender systems (this reminds me that I have not released the Flickr data collection tool :/), and my supervisor advised me to write a paragraph or two on some of the key things. Since he is not going to read it, I might as well just post it here as a note.
One of my recent tasks involved crawling a lot of geo-tagged data from a given service. The most recent one was crawling files containing a point cloud for a given location. So I began by observing the behavior in the browser. After exporting the list of HTTP requests involved in loading the application, I noticed there were a lot of requests fetching resources with a common URL prefix.
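That filtering step can be sketched roughly as below, assuming the requests were exported from the browser's network tab as a HAR file (which is just JSON); the filename is a placeholder, not the actual service:

```python
import json
from collections import Counter
from urllib.parse import urlparse

# Load the exported list of HTTP requests.
with open("capture.har") as f:
    har = json.load(f)

urls = [entry["request"]["url"] for entry in har["log"]["entries"]]

# Count requests per (host, first path segment) to spot the
# endpoint the point-cloud files are fetched from.
def bucket(url):
    parsed = urlparse(url)
    first_segment = parsed.path.strip("/").split("/")[0]
    return (parsed.netloc, first_segment)

for (host, segment), count in Counter(map(bucket, urls)).most_common(5):
    print(f"{count:4d}  {host}/{segment}")
```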