Notes on codes, projects and everything
While the previous file structure works well, I decided to tune some details before deploying the latest WordPress release. Besides that, my last theme was developed more than 2 years ago, so I also started a new theme development project. Thankfully, everything seems to work so far.
The Semantic Web is not just about putting data on the web, but also about making links so that a person as well as a machine can explore the web of data. Links in the web of data connect arbitrary things together, as described by RDF, as opposed to links in the web of hypertext, which connect only web resources. Linking arbitrary things then allows related things to be found when performing a search.
As the name implies, the Resource Description Framework, or RDF in short, is a language for representing information about resources in the World Wide Web. The information that can be represented is mostly metadata such as the title (assuming the resource is a web page), author, last modified date, etc. Besides representing resources that are network-accessible, it can also represent things that cannot be retrieved over the network, as long as they can be identified by a URI.
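As a quick illustration (not from the original post), here is a minimal sketch using Python's rdflib of how such page metadata could be written down as RDF triples; the URI, the choice of Dublin Core properties and the values are all made-up examples:

from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC, XSD

# A made-up URI; the thing it identifies does not have to be
# retrievable over the network in order to be described in RDF.
page = URIRef("https://example.org/notes/semantic-web")

g = Graph()
g.add((page, DC.title, Literal("Notes on the Semantic Web")))      # title
g.add((page, DC.creator, Literal("Some Author")))                   # author
g.add((page, DC.date, Literal("2011-01-01", datatype=XSD.date)))    # last modified

print(g.serialize(format="turtle"))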
I am currently preparing to apply for a postgrad programme and am looking for a research topic. At first I wanted to do something related to cloud computing, but after some discussion with people around me, they suggested that I do something on the semantic web instead. While posting my notes here, I realized that I had posted something on semantic networks, which look like the basis of the semantic web, here (post still “Under construction” as of writing; will post the diagrams later tonight).
I often struggle to keep my Javascript code organized, and have tried numerous ways to do so. I have tried putting related code into classes and instantiating them as needed, then abusing jQuery’s data() method to store everything (from scalar values to functions and callbacks). Recently, after learning (briefly) how a jQuery plugin should be written, I found that it greatly simplifies my code.
array_map is the function I have been using the most in my PHP scripts recently. However, there are times when I want to pass some non-array values into it, so I often end up with code like the snippet shown below:
$result = array_map(
    'some_callback',
    // Repeat each scalar argument so that every call to some_callback
    // receives it alongside the corresponding element of $some_array.
    array_fill(0, count($some_array), 'some_string'),
    array_fill(0, count($some_array), 'some_other_string'),
    $some_array
);
It doesn’t look good IMO, as it makes the code look complicated. Hence, after seeing how the code varies across different scenarios, I created some functions to clean up array_map calls like the one above. Code snippet after the jump
Oftentimes I am dealing with JSONL files. Though pandas’ DataFrame is great (and blaze to a certain extent), it offers too much for the job. Most of the data I receive is in the form of structured text, and I do all sorts of work with it: checking for consistency, doing replacements based on the values of other columns, stripping whitespace, etc.
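As a rough sketch of that kind of work (this is not the tool described in the post, and the field names “name” and “country” are made up), JSONL can be processed record by record with nothing more than the standard library:

import json

def clean_records(path):
    # Stream the file line by line; each non-empty line is one JSON record.
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            if not line.strip():
                continue
            record = json.loads(line)
            # Strip whitespace from a text field.
            record["name"] = record.get("name", "").strip()
            # Example of a replacement based on another column's value.
            if record.get("country") == "MY":
                record["country"] = "Malaysia"
            yield record

# Usage: write the cleaned records back out as JSONL.
# with open("clean.jsonl", "w", encoding="utf-8") as out:
#     for rec in clean_records("raw.jsonl"):
#         out.write(json.dumps(rec) + "\n")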
My cloud storage is nearly exploding, and I am not in a good position to start subscribing to more storage (Yes, I am still #opentowork). Considering I just moved my domain settings to Cloudflare and started using Cloudflare Tunnel, I figure I should probably just back up some of my photos and host them on my workstation.
This is basically a small incremental update to my script published here. For some reason, the previous version of the script didn’t really work, so this release should fix the problem. Besides fixing the problem where the daemon did not actually launch at start-up, I have also added a settings applet for the script.
This post is purely my own speculation, as there are no experiments on real-life data to back the arguments. I am currently trying to document a plan for my experiment(s) on recommender systems (which reminds me that I have not released the Flickr data collection tool :/), and my supervisor advised me to write a paragraph or two on some of the key points. Since he is not going to read it, I might as well just post it here as a note.
So apparently Annoy now splits points by using the centroids of 2-means clustering. It is claimed that this provides better results for ANN search, but how does it affect regression? Purely out of curiosity, I plugged in a new point-splitting function and generated a new set of points.
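For reference, here is a minimal sketch of the two-means split idea (Annoy’s actual implementation is in C++, and this is not my experimental code either): run k-means with k = 2 on the points in a node, then split along the hyperplane equidistant from the two centroids.

import numpy as np

def two_means_split(points, n_iter=10, rng=None):
    """Split points by the hyperplane between two k-means (k=2) centroids."""
    rng = rng or np.random.default_rng()
    # Start from two distinct random points as initial centroids.
    idx = rng.choice(len(points), size=2, replace=False)
    centroids = points[idx].astype(float)
    for _ in range(n_iter):
        # Assign each point to its nearest centroid, then recompute both centroids.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centroids[k] = points[labels == k].mean(axis=0)
    # The splitting hyperplane is normal to the line joining the centroids
    # and passes through their midpoint.
    normal = centroids[1] - centroids[0]
    offset = -normal.dot((centroids[0] + centroids[1]) / 2)
    side = points.dot(normal) + offset > 0
    return side, (normal, offset)

Each point then goes down the left or right child of the tree node depending on which side of that hyperplane it falls on.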