Notes on codes, projects and everything
While JSON is a fine data-interchange format, it does have some limitations. It is well known for its simplicity: even a non-programmer can easily compose a JSON file (but humanity will surprise you IRL). As a result, it is found almost everywhere, from numerous web APIs to geospatial data (GeoJSON), and even the semantic web (RDF/JSON).
Previously, I started practising recursion by implementing a type check for lat (list of atoms) and ismember (whether an atom is a member of a given lat). Then in the third chapter, titled “Cons the Magnificent”, more list manipulation methods are introduced.
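For illustration, here is a minimal recursive sketch of those two checks in Python (the book works in Scheme; the function names and the list representation here are my own):

```python
def is_atom(x):
    """An atom here is anything that is not a list."""
    return not isinstance(x, list)

def lat_p(lst):
    """True if lst is a list of atoms (a `lat`), checked recursively."""
    if not lst:                  # the empty list is trivially a lat
        return True
    if is_atom(lst[0]):          # head is an atom: recurse on the tail
        return lat_p(lst[1:])
    return False

def is_member(atom, lat):
    """True if atom occurs somewhere in lat, checked recursively."""
    if not lat:
        return False
    return lat[0] == atom or is_member(atom, lat[1:])

print(lat_p(["a", "b", "c"]))        # True
print(is_member("b", ["a", "b"]))    # True
```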
Recently I noticed that some of my pet projects share a common pattern: they are all based on some kind of grid. As a result, I found myself writing similar pieces of code over and over again. While re-inventing the wheel is quite fun, especially when you learn a new way of getting things done with every iteration, it becomes rather tedious after a while.
In the last part, I implemented a couple of primitive functions so that they can be applied in the following chapters. The second chapter of the book is titled “Do it again, and again, and again…”. The title already hints that readers will deal with repetition throughout the chapter.
I saw this article from alistapart about JavaScript’s prototypal object orientation. The article mentioned Douglas Crockford, and I was immediately reminded of my struggle to understand the language itself. Back then I also used to refer to his site for a lot of JavaScript notes, so I went back for a quick read and found this article discussing the similarity between JavaScript and Lisp.
Letting a piece of code evolve by itself without much planning does not usually end well. However, I was quite pleased with one by-product of doing so, and I am currently formalizing it. The by-product is a sort of DSL for a rule engine that I implemented to process records. It started as some lambda functions in Python but eventually became something else.
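As a rough illustration of that starting point (the actual DSL is the post’s subject; the rules and field names below are my own invention), rules can begin life as plain predicate/action pairs of lambdas:

```python
# Each rule pairs a predicate with an action, both plain lambdas.
rules = [
    (lambda rec: rec.get("age", 0) < 0,
     lambda rec: {**rec, "age": None}),                  # invalid ages become None
    (lambda rec: rec.get("name", "") != rec.get("name", "").strip(),
     lambda rec: {**rec, "name": rec["name"].strip()}),  # strip stray whitespace
]

def apply_rules(record, rules):
    """Run every matching rule's action over the record, in order."""
    for predicate, action in rules:
        if predicate(record):
            record = action(record)
    return record

print(apply_rules({"name": " alice ", "age": -1}, rules))
# {'name': 'alice', 'age': None}
```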
So my cheat with dask worked fine and dandy, until I started inspecting the output (which was to be used as input for another script). While the script seemed to run fine, when I started to parse each line I was hit with some funny syntax errors. After some quick inspection I found that some of the lines were not printed completely.
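That inspection can be as simple as attempting to parse every line and reporting the ones that fail; a quick sketch (the file name here is a placeholder):

```python
import json

# Flag lines in the output that are not complete JSON documents
with open("output.jsonl") as f:
    for number, line in enumerate(f, start=1):
        try:
            json.loads(line)
        except json.JSONDecodeError as err:
            print(f"line {number} looks truncated or garbled: {err}")
```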
Oftentimes I am dealing with JSONL files; though pandas’ DataFrame is great (and blaze to a certain extent), it offers too much for the job. Most of the data I receive is in the form of structured text, and I do all sorts of work with it: checking for consistency, doing replacements based on the values of other columns, stripping whitespace, etc.
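A minimal sketch of that kind of line-oriented processing without pulling in a DataFrame (the cleanup rules and column names below are just examples of the sort of work described):

```python
import json

def clean(record):
    """Example cleanup: strip whitespace, fix one column based on another."""
    record = {k: v.strip() if isinstance(v, str) else v
              for k, v in record.items()}
    if record.get("country") == "" and record.get("city") == "Singapore":
        record["country"] = "SG"   # hypothetical consistency fix
    return record

with open("input.jsonl") as src, open("output.jsonl", "w") as dst:
    for line in src:               # one JSON document per line
        record = json.loads(line)
        dst.write(json.dumps(clean(record)) + "\n")
```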
I came across a video on YouTube on Pi Day. Coincidentally, it was about estimating the value of Pi, produced by Matt Parker aka standupmaths. While I am not that interested in knowing the best way to estimate Pi, I am quite interested in the algorithm he showed in the video. Specifically, I wanted to find out how easy it would be to implement in Python.
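The excerpt doesn’t name the algorithm, but as an example of how short such an estimator can be in Python, here is the coprime-pairs estimate (two random integers are coprime with probability 6/Pi²), which may or may not be the one from the video:

```python
import math
import random

def estimate_pi(samples=1_000_000, upper=10**6):
    """Estimate Pi from the fact that two random integers are coprime
    with probability 6/Pi^2, so Pi = sqrt(6 / fraction_coprime)."""
    coprime = sum(
        1 for _ in range(samples)
        if math.gcd(random.randint(1, upper), random.randint(1, upper)) == 1
    )
    return math.sqrt(6 * samples / coprime)

print(estimate_pi())  # roughly 3.14...
```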
Traversing a tree structure often involves writing a recursive function. However, Python isn’t the best language for this purpose, so I started flattening the tree into a key-value dictionary structure. Logically it is still a tree, but it is physically stored as a dictionary, which makes it easy to traverse with a simple loop.
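A minimal sketch of the idea, assuming path-like keys (the exact key scheme in the post may differ):

```python
# Each key is a path; each value is the node's payload.
# The parent/child relationship is implicit in the key prefixes.
tree = {
    "/": "root",
    "/a": "left child",
    "/a/x": "grandchild",
    "/b": "right child",
}

# Traversal becomes a plain loop instead of a recursive function.
for path in sorted(tree):
    depth = 0 if path == "/" else path.count("/")
    print("  " * depth + f"{path}: {tree[path]}")
```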
A long, long time ago, when I was working with Prolog, I was introduced to lists. Unlike arrays in most popular programming languages, you can’t really access a particular member directly: every list is constructed as a chain-like structure.
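The chain can be mimicked in Python with nested pairs; a sketch of the cons-cell idea (Prolog’s own lists are built into the language):

```python
# A list as a chain of (head, tail) pairs, terminated by None:
# [1, 2, 3] becomes (1, (2, (3, None)))
lst = (1, (2, (3, None)))

def nth(chain, index):
    """Reaching the n-th member means walking the chain from the front."""
    while index > 0:
        _, chain = chain      # drop the head, keep the tail
        index -= 1
    head, _ = chain
    return head

print(nth(lst, 2))  # 3
```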
Sometimes I really doubt the advantage of recycling old stuff to fund new units, beyond the goodwill. Sure, you get to convince yourself that you are saving the environment by doing so, and it also saves money in the long run. However, I didn’t realize how much it could generate until I tried to work out an answer to a fictional IQ question.
One of my recent tasks involves crawling a lot of geo-tagged data from a given service. The most recent one is crawling files containing a point cloud for a given location. I began by observing the behavior in the browser; after exporting the list of HTTP requests involved in loading the application, I noticed a lot of requests fetching resources with a common rXXX pattern.
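Request lists exported from a browser’s developer tools typically come as a HAR file (JSON with requests under log.entries); a small sketch for surfacing repeated URL patterns in one (the digit-masking regex is just an example, not the service’s actual naming scheme):

```python
import json
import re
from collections import Counter

# A HAR export is JSON; each request lives under log.entries
with open("requests.har") as f:
    har = json.load(f)

urls = [entry["request"]["url"] for entry in har["log"]["entries"]]

# Mask digit runs with a placeholder so repeated URL shapes group together
patterns = Counter(re.sub(r"\d+", "<n>", url) for url in urls)
for pattern, count in patterns.most_common(10):
    print(count, pattern)
```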
I failed a programming assessment test miserably yesterday and thought I should at least document it. The problem is that the questions are copyrighted, so I guess I will write about it from another point of view. The main reason I failed was that I chose the wrong strategy for the problem: I thought it was the right approach, but the more time I put into it, the more problems I created.
Recently I was assigned to work on an FSM-based workflow system so that my colleagues can use it in their applications. I am rather impressed with the simplicity of the workflow system (initially designed by my Technical Director) and decided to post some notes here. The workflow system was developed for the CodeIgniter PHP framework and the Drupal CMS.
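To give a feel for how small a finite-state-machine workflow can be, here is a minimal sketch in Python (the original is in PHP, and the states and actions below are made up for illustration):

```python
# Transitions: (current_state, action) -> next_state
TRANSITIONS = {
    ("draft", "submit"): "review",
    ("review", "approve"): "published",
    ("review", "reject"): "draft",
}

def step(state, action):
    """Advance the workflow, refusing transitions the table doesn't allow."""
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"cannot {action!r} from state {state!r}")

state = "draft"
state = step(state, "submit")    # -> "review"
state = step(state, "approve")   # -> "published"
print(state)
```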