Development environment setup for Lisp

I had been reading about Lisp as a development language for a very long time (Paul Graham’s writing definitely played a factor, along with a couple of other things). I even did start on Land of Lisp but never could finish it, and Lisp fell off my radar.

I wanted to get back to this (more on that later) and had to set up a dev environment that would help me learn. Even though clisp was an option, I didn’t want to rely on gedit and clisp on the command line. That didn’t feel right.

So, here is the account of how I went about setting up Lisp on Linux Mint (Ubuntu would do just fine). If you are looking for instructions for Windows, most of these might work – YMMV.

  1. Based on this discussion on SO, I decided to use ClozureCL. Even though both SBCL and ClozureCL seem to be able to generate native binaries, I thought that native support for threads would be helpful in the longer run. Ideally, though, there would be no differences between the various Lisp flavors
  2. The next step was to download CCL onto the Linux machine. Again, I preferred setting up everything on Linux, just to spare myself the pain of getting things working. The main idea is to write Lisp code, not worry about the setup (though the latter took a lot of time too)
  3. Download CCL from the Clozure FTP site – I got the 64-bit version, since I was running the Linux Mint Nadia distro on a laptop and Linux Mint Maya in a VM
  4. The installation just required unzipping the file. The other change I made was to the $CCLDIR/scripts/ccl and $CCLDIR/scripts/ccl64 scripts, setting CCL_DEFAULT_DIRECTORY to the CCL installation directory
  5. That was the CCL setup; the next thing was setting up the IDE. After searching for a while, I realized that the most commonly suggested IDE seemed to be Emacs. I have been trying to get my head around Emacs for a while (I have been an intermediate vim user, though I haven’t done much with it lately). So I thought, well, maybe this is a good time to start on Emacs
  6. Emacs was already installed on my machine, so I went through the built-in tutorial. Again, based on comments, this seemed to be the best way of learning Emacs. After about 35% of the tutorial, I felt ready to *start* using Emacs. Mind you, at this stage I knew how to navigate, a bit of yank and kill, and some keys that I remembered. Nothing fancy
  7. Then I tried to ‘execute’ some Lisp code. The first function was rather simple: (defun myfn () (+ 2 3)). From the standard Emacs, I couldn’t figure out how this could be done. The standard M-x run-lisp didn’t work straight out of the box, since I was not using clisp and my Lisp was not installed in the default location. Modifying the ~/.emacs file didn’t help either: setting the inferior-lisp-program variable alone didn’t do it – ccl wouldn’t execute, or I would see warnings from swank that I didn’t understand
  8. At this stage, I was very frustrated and thought that SLIME might be the only way out. Reading the SLIME documentation made me think that SLIME was the one I needed
  9. So, the next thing: installing SLIME and configuring it for CCL. Clozure’s trac has the instructions
  10. First, download the beta version of Quicklisp. Quicklisp is the apt-get / Maven / PyPI / CPAN of Lisp. So, the first thing to do is download the quicklisp.lisp file using curl -O
  11. Then load quicklisp.lisp into CCL: ccl64 -l quicklisp.lisp
  12. Then call the install function: (quicklisp-quickstart:install)
  13. This downloads the required files and creates a ~/quicklisp directory with the required modules
  14. Then run (ql:quickload :quicklisp-slime-helper) in the same CCL session. This installs the SLIME modules for CCL
  15. Then the ~/.emacs file has to be changed. I provided the location of the ccl64 script for the inferior-lisp-program variable. Also, instead of providing the location of a SLIME checkout, provide the location of slime-helper.el:
    ;;(add-to-list 'load-path "~/code/slime/") ; or wherever you put it
    (load (expand-file-name "~/quicklisp/slime-helper.el"))
    (setq inferior-lisp-program "~/code/tools/ccl/scripts/ccl64 -K utf-8")

    More information on configuring SLIME is on SO
  16. The next step is to load SLIME. Before that, open the file that you want to compile: C-x C-f ~/code/test1.lisp
  17. Then load SLIME with M-x slime. If all is well, you should be greeted with a CL-USER> REPL prompt. You can test that CCL is working with a simple function: defining (defun myfn () (+ 2 3)) and calling it with (myfn) should do the trick.
  18. The next thing is to be able to ‘compile’ the files loaded in the editor. In the file’s buffer, you can compile the file by typing C-c C-k. If the file doesn’t have any errors, you will see the message ‘compilation finished’
  19. The nice bit is that SLIME picks up the latest definitions when a modified file is recompiled. So you can switch to the SLIME buffer (C-x b switches to the last used buffer) and call the function to check whether the new changes have been picked up.
  20. And by the way, to exit the Lisp REPL, the function to use is (quit)

Hopefully the above instructions help you set up Emacs, Clozure CL, and SLIME, and get you started writing some Lisp code. I am still learning Emacs, and a tip here – start with the built-in tutorial. It is a bit long, but it will be worth the time. I will publish my cheat sheet for the Emacs keys some time in the future.

Calculate powerset of a set

This post is an attempt to understand how the calculation of a powerset proceeds for a given input list. Note that I said calculation, since this is technically a set operation. The idea comes from this link. The Haskell code tries to mirror the algorithm given in the Wiki.

The basic idea of the algorithm is this – take one element from the set and combine (perform a union, in mathematical terms) this element with the powerset of the remaining elements. To calculate the powerset of the remaining elements, continue the same process recursively. And the powerset of an empty list [] is a list containing an empty list [[]], which is the terminating condition for the recursion. The Haskell code on the site is simple looking, but packs a good punch. map in Haskell is like map in any other language – it applies the given function to every element in the list. In the code below, (x:) means prepend x to the front of a list, and the ++ operator concatenates two lists.

powerset :: [a] -> [[a]]
powerset [] = [[]]
powerset (x:xs) = powerset xs ++ map (x:) (powerset xs)

Let us walk through the execution of the recursive algorithm. For the sake of brevity, I am going to name the function ps.

ps [1,2,3] = ps [2,3] ++ map (1:) (ps [2,3])
                     |--- (1)
ps [2,3] = ps [3] ++ map (2:) (ps [3])
             |-- (2)
ps [3] = ps [] ++ map (3:) (ps [])
           |-- (3)
= [[]] ++ map (3:) (ps [])
= [[]] ++ map (3:) ([[]])
= [[]] ++ [[3]]
= [[],[3]]
  --- so, the result of (2) is ps [3] = [[],[3]] -- (Result1)

Now we come back to ps [2,3]. So,
ps [2,3] = [[],[3]] ++ map (2:) (ps [3])
=> ps [2,3] = [[],[3]] ++ map (2:) [[],[3]] -- from Result1
=> ps [2,3] = [[],[3]] ++ [[2],[2,3]]
=> ps [2,3] = [[],[3],[2],[2,3]] -- (Result2)

Now we come back to ps [1,2,3]
As noted above,
ps [1,2,3] = ps [2,3] ++ map (1:) (ps [2,3])
=> ps [1,2,3] = [[],[3],[2],[2,3]] ++ map (1:) (ps [2,3])
     -- from (Result2) we know ps [2,3]
=> ps [1,2,3] = [[],[3],[2],[2,3]] ++ map (1:) [[],[3],[2],[2,3]]
=> ps [1,2,3] = [[],[3],[2],[2,3]] ++ [[1],[1,3],[1,2],[1,2,3]]
=> ps [1,2,3] = [[],[3],[2],[2,3],[1],[1,3],[1,2],[1,2,3]]

I have reused the results mentioned in Result1 and Result2, but during execution those results are re-computed (I am not 100% sure of this, since Haskell’s lazy evaluation might share and reuse the results of common sub-expressions).
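
The same recursion is easy to mirror in Python. Here is a direct translation of the Haskell definition (list concatenation plays the role of ++, and the comprehension plays the role of map (x:)), with the sub-result computed once and reused:

```python
def powerset(xs):
    """Powerset of a list, mirroring the Haskell definition above."""
    if not xs:                    # powerset [] = [[]]
        return [[]]
    x, rest = xs[0], xs[1:]       # the (x:xs) pattern match
    ps_rest = powerset(rest)      # computed once and reused
    return ps_rest + [[x] + s for s in ps_rest]

print(powerset([1, 2, 3]))
# → [[], [3], [2], [2, 3], [1], [1, 3], [1, 2], [1, 2, 3]]
```

The output matches the hand-computed result above, including the order of the subsets.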

Caesar cipher in one line

Can we create an encrypt and a decrypt using the Caesar cipher in one line? I’m sure you can. Here is one way to do it in Python (Python 2): the first line encrypts (shift by 1), the second decrypts. This assumes the text is typed in English (or any text where the lower() function on a string works).
import sys;print ''.join([chr(x) for x in [x if x else x+ord('a') for x in [ (ord(x.lower())+1)%(ord('z')+1) for x in sys.argv[1]]]])
import sys;print ''.join([chr(x) for x in [x if (x%(ord('a')-1)) else ord('z') for x in [ (ord(x)-1) for x in sys.argv[1]]]])
So, now you can send your next missive in shift+1 cipher ! Of course, you are not going to send trade secrets that way, would you? 🙂
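
For anything beyond a one-liner party trick, a longer (Python 3) version of the same shift cipher is easier to read; decryption is just encryption with the negated shift. The function name and the shift parameter here are my own, not part of the one-liners above:

```python
def caesar(text, shift):
    """Shift each letter by `shift` positions, wrapping around the
    alphabet; characters other than letters pass through unchanged."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

print(caesar("hello", 1))    # → ifmmp
print(caesar("ifmmp", -1))   # → hello
```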

Install Python-MySQL on Windows 7 64-bit

I’ve been wanting to try my hand at SQLAlchemy for a while now, for no particular reason other than checking out the APIs for the ORM. To decide on the database to use, I thought MySQL might be a good idea because most of the OSS libraries support MySQL. But the catch was that I wanted to do all this on Windows 7, and a 64-bit Windows 7 at that. So this was a bit tricky: I couldn’t just do easy_install MySQL-Python on Windows. So, here is a step-by-step guide to install 64-bit MySQL and Python-MySQL on Windows 7. The steps might work with Windows Vista too. You will need:


  1. Python 2.7 – might not work with Python 3
  2. Visual Studio 2008. The Visual Studio express edition will also work. Essentially, a C++ compiler is required
  3. Access to modify the registry
  4. MySQL DB 5.5 on Windows

Installing Python-MySQL

  1. Download the combined MySQL installer for Windows from MySQL. The current version is 5.5
  2. Download the Python-MySQLDB library here – Python-MySQLDB. The current version is – MySQL-python-1.2.3
  3. You need to have the Microsoft Visual Studio installed on your machine. I had Visual Studio 2008 installed, but this can work with the Microsoft Visual Studio Express edition too
  4. Install MySQL – select the ‘developer’ configuration during install. This will install the required libraries and the header files for the C connector – these are important. Note the directory you are installing MySQL into
  5. This will install MySQL 64-bit on the machine. I am running Windows 7 64-bit and hence I installed the 64-bit version of MySQL
  6. Extract the MySQL-python-1.2.3 into a directory
  7. Open the site.cfg and make the following modification
    -registry_key = SOFTWARE\MySQL AB\MySQL Server 5.0
    +registry_key = SOFTWARE\Wow6432Node\MySQL AB\MySQL Server 5.5

    Based on the version of the MySQL server, change the 5.5 to whatever version you installed. On 64-bit Windows 7 (and I think on Vista too), the registry key is moved to this location during installation. This took me a long time to find. It has been documented here too – MySQL-Python forum (check comment no. 13)
  8. Then modify site.cfg by adding library_dirs and include_dirs. Here we add the directories for the C connector that was installed as part of the MySQL installation. I could not locate the registry key for the connector, so I added the directories for the headers and the libraries to the compiler parameters. Note that I added the optimized library directory; if you are debugging the MySQL connector, you will want to include the debug version of the libraries instead
    library_dirs = [ os.path.join(mysql_root, r'lib\opt'), "C:\Program Files\MySQL\Connector C 6.0.2\lib\opt" ]
    include_dirs = [ os.path.join(mysql_root, r'include'), "C:\Program Files\MySQL\Connector C 6.0.2\include" ]
  9. One final step and you are ready to go. As documented here, one more file needs modifying: look for the line containing
    ld_args.append('/MANIFESTFILE:' + temp_manifest) and add a new line after it – ld_args.append('/MANIFEST')
  10. Now install the Python library by running python setup.py install in the MySQL-python-1.2.3 directory
  11. This will use the Visual Studio compiler and the directories mentioned in include_dirs and library_dirs to build the .egg file for the MySQL-Python library
  12. Some more help here
  13. You can also check this on SO, though for 64-bit Windows I found that this solution did not work

Export all the RSS feeds in Opera

I wanted to take a backup of my Opera profile, including the RSS feeds. I find it convenient to read the feeds from Opera instead of an external viewer. When I used the standard export-feeds option (File –> Import and Export –> Export Feed List), I realised that only the feeds I am currently subscribed to got exported. Aaah, that is a problem ! I have a lot of feeds in my feeds-list, and I keep turning a few of them on and off as and when I feel I can’t keep up with them (sigh, the problem of over-information !)

I searched a bit to find out if there was a flag I was missing in the configuration. I didn’t find any. So I thought, why not write a small script to do this for me. The first choice of language was Python, but then I thought, naa, let me try something groovy 😀

Simple steps to achieve this:

  1. Find out where and how the feeds are currently stored in Opera
  2. If it is a standard format (which it mostly would be), find a Java library that will parse it
  3. Convert that into an OPML file – again using some existing Java library

The answers to the above:

  1. The feeds are stored in a .ini file (yaaay ! +1 for the Opera team for picking a simple format). They live in the mail directory (you can find it using about:opera and checking the mail directory entry). In the mail directory, the feeds are stored in a file named incoming1.txt (or one of several incoming<n>.txt files)
    1. The format is simple. Each feed has its own section with a name, a URL, and whether you have subscribed to it. Here is an example
        [Feed 95]
        Name=good coders code, great reuse
        Update Frequency=3600
  2. Following this suggestion, I started with ini4j, but soon realised that it couldn’t handle UTF-8 files – and the feeds file is a UTF-8 file. So, I decided to check out Apache Commons. The main library required for this is commons-configuration, but it pulls in a bunch of dependencies:
    1. Commons-collections
    2. commons-lang
    3. commons-logging
  3. And the exported file is an OPML file, specifically an OPML 1.0 file. The library of choice for this seems to be ROME, and ROME has an OPML output generator too. Sweet ! The dependencies for this are:
    1. rome-1.0.jar (the ROME library)
    2. opml-0.1.jar (the OPML generator)
    3. jdom.jar (a dependency for ROME)

I must say, I was a little annoyed with the list of dependencies, but then that is the Java world, so I shrugged my shoulders and continued.

The IniToOPML.groovy file exports the ini file to an OPML 1.0 file. I couldn’t figure out why the <head></head> node was not getting generated, but for now, this seems to work fine.
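
A rough Python-standard-library equivalent of what IniToOPML.groovy does is sketched below. The URL key name is an assumption on my part (the real incoming1.txt stores a feed URL per section, but I am reconstructing the field name for illustration):

```python
import configparser
import xml.etree.ElementTree as ET

def ini_to_opml(ini_text):
    """Convert Opera's feed ini (sections like [Feed 95] with Name
    and URL keys) into an OPML 1.0 document string."""
    cp = configparser.ConfigParser(interpolation=None)
    cp.read_string(ini_text)

    opml = ET.Element('opml', version='1.0')
    ET.SubElement(ET.SubElement(opml, 'head'), 'title').text = 'Opera feeds'
    body = ET.SubElement(opml, 'body')

    for section in cp.sections():
        if not section.startswith('Feed'):
            continue  # skip non-feed sections, if any
        name = cp[section].get('Name', section)
        url = cp[section].get('URL', '')
        ET.SubElement(body, 'outline', text=name, type='rss', xmlUrl=url)
    return ET.tostring(opml, encoding='unicode')

sample = """[Feed 95]
Name=good coders code, great reuse
URL=http://example.org/feed.xml
Update Frequency=3600
"""
print(ini_to_opml(sample))
```

No ROME or commons-configuration needed here: configparser reads the ini and ElementTree writes the OPML, both from the standard library.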

I am not comfortable enough with Maven yet to write a pom.xml that automatically downloads the required packages for the binary. So, for now, it is source only (maybe this way I will learn Maven !).

JSON support for the Kiva .NET toolkit

If you are interested in microfinance, then it is very likely that you know about Kiva. And if you know about Kiva, then you might know that they provide public access to their data via web services. And once they did, toolkits became available in most languages.

One of the implementations that caught my interest was Kiva .NET. I had never worked with C# before, so I thought this might be a good project to start with. The reasons:

  1. It’s a toolkit – so it is compact
  2. There is already an implementation for XML that was done by the original author
  3. I wanted to write some code using LINQ, even if it was not working off DB data sets

So, given the above, I added support for JSON data to the Kiva.NET library (I can’t locate the link where the Kiva folks mentioned that they would release the JSON format first and then the XML for the API). Nevertheless, the JSON format support has been added.

Most importantly – this code is seriously beta. The reasons are

  1. The code is not backward compatible with the existing Kiva.NET implementation. Almost all the objects have changed:
    1. Attribute names (members of the class) have changed to make it possible to deserialize the JSON data (more on this further)
    2. For some objects, as the data returned by Kiva has changed, the object layout also had to change
  2. Exception handling is not done
  3. I did not check all the APIs to see whether the JSON implementation is as full-featured as the XML one
  4. Productizing the API is not done, i.e.
    1. Versioning of the library
    2. Proper comments and documentation
    3. Test suite for the toolkit

But the code does work :). In the zip file attached, the KivaTest project has a very simple main() to test each of the APIs. Also, this requires at least JSON.NET 3.5R5, as it has support for non-public default constructors. Thanks to James for adding this (you can follow the discussion here).

Coming to the point about the attribute names being changed. The reason this was needed is that I was deserializing the JSON object using JSON.NET, and the library tries to match the attribute names with the JSON data. As mentioned in the Kiva API specifications, the variable names are separated with underscores. The initial implementation of the Kiva .NET library didn’t use underscores for the attribute names; that is why my changes are not backward compatible. I did check the resolver interface in JSON.NET, but that is used to fiddle with the attribute names of the objects, not with the JSON data. What I am looking for is to modify the JSON attributes so that I can remove the underscores from the attribute names before deserializing them into Kiva objects. I am certain it must be possible; I am just a n00b when it comes to C#, so I will dig more and find out.
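
The key transformation I was after is easier to show outside C#. In Python, it would be a pre-pass over the parsed JSON that renames the underscore-separated keys before binding them to objects. The key names below are illustrative, not Kiva’s actual schema:

```python
import json

def strip_underscores(obj):
    """Recursively rename dict keys like loan_amount -> loanamount,
    so they match attribute names that carry no underscores."""
    if isinstance(obj, dict):
        return {k.replace('_', ''): strip_underscores(v)
                for k, v in obj.items()}
    if isinstance(obj, list):
        return [strip_underscores(v) for v in obj]
    return obj  # scalar values pass through untouched

raw = json.loads('{"loan_amount": 500, "posted_date": "2009-01-01"}')
print(strip_underscores(raw))
# → {'loanamount': 500, 'posteddate': '2009-01-01'}
```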

Will have more updates in the coming days based on how the owner of the library reviews my changes ! The patch file and the source zip with the test client are uploaded as part of this post.

Missing AsyncGetResponse in WebRequest

Disclaimer: I am not well versed in the various cool features that the .NET programming framework provides. So, when someone says, oh well, I wrapped it in a delegate and passed it into another, I am not sure if they mean they just created an anonymous function and passed it along, or something more than a lambda expression.

So, when I was trying the code that Luca Bolognese showed in his F# tutorial – which uses WebRequest.AsyncGetResponse wrapped in an async block to demonstrate how easy it is to run code asynchronously and in parallel – I didn’t know where to find AsyncGetResponse. All the intellisense showed was WebRequest.GetResponse. I knew I was missing some library, but I was not sure which one. Not sure, that is, till I saw the post by Nick Hodge in which he talks about the Microsoft Parallel Extensions to the .NET Framework. That is when I thought, maybe these parallel extensions are the ones that auto-magically wrap GetResponse into AsyncGetResponse.

So, I downloaded the CTP version, and when I wrapped the ticker data retrieval code in an async block, WebRequest.AsyncGetResponse() popped up in the intellisense. Long story short – you need the Parallel Extensions to the .NET Framework 3.5 to be able to use WebRequest.AsyncGetResponse.
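
For comparison, the pattern Luca demonstrates – kick off several requests and wait for all of them – looks like this in Python with a thread pool. fetch below is a sleep-based stand-in for the real ticker retrieval, not actual .NET or web code:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def fetch(symbol):
    """Stand-in for a web request; pretend each call takes ~0.1s."""
    time.sleep(0.1)
    return symbol, len(symbol)  # dummy 'response'

symbols = ['MSFT', 'GOOG', 'AAPL', 'ORCL']

# Sequentially these four calls would take ~0.4s; with the pool the
# 'requests' overlap and the whole batch finishes in ~0.1s.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch, symbols))

print(results)
# → [('MSFT', 4), ('GOOG', 4), ('AAPL', 4), ('ORCL', 4)]
```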

Steve Vinoski’s interview (about CORBA among other things !)

Steve Vinoski’s interview at InfoQ has some interesting thoughts about CORBA and its future. Even if you don’t use CORBA, it is still worth watching the interview to understand the possible ways one can design a distributed, enterprise-class application.

Here is a list of interesting points that Steve makes in his interview:

  1. If you are building a large enterprise scale distributed system, he says he’d prefer using REST over CORBA
  2. Suggests looking at Erlang for concurrency and middleware applications (I’ve read a lot about how well Erlang scales compared to most other dynamic languages). Python’s threads are not really that scalable given the GIL; I don’t know enough about Ruby’s.
  3. From his interview, he seems to suggest that CORBA is now relegated to integrating with older systems that were built using CORBA – which is probably true, as I don’t see much being published or talked about CORBA in the developer community. It might be a fad, and the true designers who have seen it all might still prefer CORBA. If you are one of those, I’d be interested in knowing your thoughts on designing a maintainable, distributed application.
  4. He says that the idea of the IDL as an interface is not really true, and that the REST verbs can pretty much replace the IDL’s verbs (its functions). This, I think, is possibly an oversimplification of the IDL. However fanciful REST verbs might look, they might not provide the richness of functions in an IDL (of course, IDLs have their own drawbacks). I think equating the two is a stretch.

The usual language-of-choice question does bring up an interesting point. Most application developers try to fit the problem to the language, rather than using the language that best suits the problem. In such a case, cross-language integration is not a simple task. If I were to write an MQ in Erlang with all the goodness the language provides, integrating it with a Java application (Java seems to have become the language of choice for pretty much all applications – from banking systems to rovers on Mars !) is not easy.

In such a case, are architects and developers left to fit the problem to the language ? Will attempting a cross-language middleware create another architecture like CORBA ? And the bigger problem is a cultural one. If the product I am building uses Java as the primary language, then how do I convince architects / developers that there are certain parts of this application which are best written in (say) Haskell / Erlang / Lua (or any other language) ? Does the question of consistency and purity of design come in the way of choosing the right tool for the job ?

wget links for the video files of MIT-OCW SMA 5503

MIT’s OpenCourseWare has a number of courses that are of interest across areas. A few of these courses have not only the lecture notes but also the video lectures recorded at MIT. Providing these courses in the public domain is a great thing MIT is doing. It will definitely interest people like me who like to know how certain courses are taught at one of the premier educational institutions. One of the courses I recently chanced upon is Introduction to Algorithms (SMA 5503). This course also has the lecture notes uploaded to the site.

As the bandwidth at home is not very conducive to streaming video, I decided to download the files. I did find a .torrent on mininova for this course, but the seeds were stuck at 20% – not the best of situations. So, I decided to download the course from the source. The FAQ mentions how one can download the courses from OCW. Basically, one removes akamai (which seems to host the files for lower latency) from the URL and points it at OCW. Below is the list of all the .rm files for download. The list is for the high-bandwidth videos (220k RealMedia files) and the lecture notes. You can save the links in a .txt file and use something like wget to download them to the local drive. I used something like this for wget:

wget -N -c -i ./links.txt --limit-rate=15k &

links.txt contains the list of links.
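
If wget is not available, the same batch download can be sketched with Python’s standard library. The skip-if-present check is a crude stand-in for wget’s -N/-c behaviour, and links.txt is assumed to have one URL per line:

```python
import os
import urllib.request

def download_all(links_file, dest_dir='.'):
    """Download every URL listed (one per line) in links_file into
    dest_dir, skipping files that already exist there."""
    with open(links_file) as f:
        urls = [line.strip() for line in f if line.strip()]
    for url in urls:
        name = os.path.join(dest_dir, url.rsplit('/', 1)[-1])
        if os.path.exists(name):
            continue  # already fetched; skip, roughly like wget -N
        urllib.request.urlretrieve(url, name)
```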