Notes on codes, projects and everything
Implementing an Information Retrieval system is a fun thing to do. However, doing it efficiently is not (at least for me). My first few attempts didn't end well (mostly using just Go with some bash tricks here and there, with or without a database). Then I jumped back to Python, which I am more familiar with, and was pleasantly surprised by the options available. So I started with the Pandas and Scikit-learn combo.
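As a rough illustration, here is a minimal sketch of the kind of TF-IDF retrieval step this combo makes easy; the corpus and query are made up for illustration, not taken from the actual project.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A made-up toy corpus; a real project would load documents from disk or a DB
docs = pd.Series([
    "go is a compiled language",
    "python has pandas and scikit-learn",
    "information retrieval ranks documents by relevance",
])

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)  # documents -> TF-IDF matrix

# Vectorize the query with the same vocabulary, then score by cosine similarity
query_vector = vectorizer.transform(["information retrieval in python"])
scores = cosine_similarity(query_vector, doc_vectors).ravel()

# Rank documents by similarity to the query, highest first
ranking = pd.DataFrame({"doc": docs, "score": scores}).sort_values(
    "score", ascending=False
)
print(ranking)
```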
I have just restarted my job hunt, as my work at mybloggercon has almost come to an end (after helping them set up an April Fools' prank). I have sent some enquiry letters applying for jobs in the web-development field, mostly involving PHP. I prefer PHP over ASP.NET because PHP gives me greater flexibility, as I experienced while developing my final year project.
Call me a cheapskate, but I still have not subscribed to a mobile data plan after purchasing my second smartphone, a Nokia N9. There is an 'allow background connections' option, but it does not care whether the connected network is a WLAN network or a mobile data network. After finding out that Nokia had no interest in adding a separate 'allow background connections' switch for each type of network, I decided to make one of my own.
After a miserable trip back to the academic world, I finally regained the courage to return to the job market. During my time in university, I spent quite a while reading about the Semantic Web and RDF. I told myself I should publish more in this format in the future. However, that never really happened, mostly because I am too lazy.
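For a taste of what "publishing in this format" would look like, here is a hedged sketch that emits post metadata as RDF with rdflib; the vocabulary (Dublin Core) and the URI are my own assumptions, not anything from an actual post.

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC

g = Graph()

# A hypothetical post URI, chosen purely for illustration
post = URIRef("https://example.com/posts/semantic-web-notes")

# Describe the post with Dublin Core properties
g.add((post, DC.title, Literal("Notes on Semantic Web and RDF")))
g.add((post, DC.creator, Literal("the author")))

# Serialize the graph as Turtle, a common RDF text format
print(g.serialize(format="turtle"))
```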
A new day, and a new post on job applications. This time, instead of being asked for a snippet, I was asked to deliver a full application of sorts. I am not sure why this was required, but I had fun building it nonetheless, even if I am not really a fan of creating visual stuff (the crappy animation nearly killed me).
One of my recent tasks involves crawling a lot of geo-tagged data from a given service. The most recent one is fetching files containing a point cloud for a given location. I began by observing the application's behaviour in the browser. After exporting the list of HTTP requests involved in loading the application, I noticed many requests fetching resources with a common rXXX pattern.
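A minimal sketch of how one might scan such a browser export for the rXXX-style requests: browsers can save the request list as a HAR file (JSON), which is straightforward to walk through. The file name and the exact URL regex below are assumptions for illustration.

```python
import json
import re
from collections import Counter

# HAR file exported from the browser's developer tools (name assumed)
with open("capture.har") as f:
    har = json.load(f)

# Assumed shape of the pattern: a path segment like "r" followed by digits
pattern = re.compile(r"/(r\d+)")

hits = Counter()
for entry in har["log"]["entries"]:
    match = pattern.search(entry["request"]["url"])
    if match:
        hits[match.group(1)] += 1

# Show the most frequently requested rXXX resources
for name, count in hits.most_common(10):
    print(name, count)
```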