Notes on codes, projects and everything
Recently I volunteered to help build a site that reports whether certain websites are blocked locally (please don’t ask why that is happening). As it is a very simple app that just reports status, I wanted it to be easily scrape-able. One of the decisions made was that the page should have content visible on first load, which practically rules out React, my current favorite. A rough sketch of what that means in practice follows.
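The gist of "things to see on first load" is simply server-side rendering: the status has to be in the HTML itself, not filled in later by JavaScript. As a minimal sketch (Flask chosen purely for illustration, with a made-up `SITES` list and `check_blocked` placeholder; the real project may use a different stack entirely):

```python
# Minimal sketch of a server-rendered status page (Flask chosen purely
# for illustration; the real project's stack may differ).
from flask import Flask, render_template_string

app = Flask(__name__)

# Hypothetical list of sites to report on.
SITES = ["example.com", "example.org"]

def check_blocked(site):
    """Placeholder for the real reachability check."""
    return False  # assume reachable in this sketch

TEMPLATE = """
<ul>
  {% for site, blocked in results %}
    <li>{{ site }}: {{ "blocked" if blocked else "reachable" }}</li>
  {% endfor %}
</ul>
"""

@app.route("/")
def index():
    # Status is rendered into the HTML itself, so scrapers and the
    # first page load both see the data without running any JavaScript.
    results = [(site, check_blocked(site)) for site in SITES]
    return render_template_string(TEMPLATE, results=results)
```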
Implementing an Information Retrieval system is a fun thing to do. However, doing it efficiently is not (at least to me). So my first few attempts didn’t really end well (mostly using just Go/golang with some bash tricks here and there, with or without a database). Then I jumped back to Python, which I am more familiar with, and was very surprised by all the options available. So I started with the Pandas and Scikit-learn combo.
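As a taste of why that combo is pleasant, here is a minimal retrieval sketch with scikit-learn (not the actual project code; the corpus and query are made up), indexing a few documents with TF-IDF and ranking them against a query by cosine similarity:

```python
# Minimal TF-IDF retrieval sketch with scikit-learn (illustrative only,
# not the actual project code).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Made-up corpus standing in for the real collection.
documents = [
    "the cat sat on the mat",
    "dogs and cats living together",
    "a brief history of information retrieval",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)  # one TF-IDF vector per document

def search(query, top_k=3):
    """Rank documents by cosine similarity to the query."""
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors).ravel()
    ranked = scores.argsort()[::-1][:top_k]
    return [(documents[i], float(scores[i])) for i in ranked]

print(search("information about cats"))
```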
Sometimes I really doubt the advantage of recycling old stuff to fund new units, beyond goodwill. Sure, you get to convince yourself that you are saving the environment by doing so, and it also saves money in the long run. However, I didn’t realize how much it could actually generate until I tried to work out an answer to a fictional IQ question.
I came across a video on YouTube on Pi Day. Coincidentally, it was about estimating the value of Pi, produced by Matt Parker aka standupmaths. While I am not all that interested in knowing the best way to estimate Pi, I am quite interested in the algorithm he showed in the video. Specifically, I want to find out how easy it is to implement in Python.
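The algorithm from the video is a story for the full post; as a warm-up for how little Python an estimator of this kind needs, here is a plain Monte Carlo sketch (random points in the unit square, counting how many land inside the quarter circle), which is not necessarily the method shown in the video:

```python
# Monte Carlo Pi estimate: sample points in the unit square and count
# the fraction falling inside the quarter circle of radius 1.
# (Illustrative sketch only; not necessarily the algorithm from the video.)
import random

def estimate_pi(samples=1_000_000):
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # quarter-circle area / square area = pi/4, so scale the ratio by 4
    return 4 * inside / samples

print(estimate_pi())
```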
It is very much expected that there will be an endless stream of new (and oftentimes better) tools introduced to solve the same set of problems. While I am slowly resuming my programming work, and in the process of reviving my very much dead postgrad project, I found some alternatives to the tools I had used in the past. I suppose I shall just jot them down here so that there’s a reference for later use.