Thursday, November 29, 2007

ooPSLA 2007 - part 4a

ooPSLA 2007 in Montreal, Canada - Oct. 21 to Oct. 25, 2007
(this is part 4a of my braindump on ooPSLA 2007. I decided to split part 4 into two parts since I generally like reading blog posts that fit into one screen page of my laptop and imagine a majority of blog readers feel the same. Please see part 1, part 2, and part 3)



In addition to various paper sessions and one big poster session, like most week-long conferences, ooPSLA includes a wide variety of workshops, co-located symposia, and tutorials. Unlike many other conferences, these secondary sessions are attended and taught by leaders in the field, and often by the progenitors of the technologies and ideas themselves.

These other sessions attract industry participants and thus create a good mix of attendees. For instance, this year, while waiting in line for lunch, I met developers from one of the big US banks who swear by ooPSLA; one had attended for the past five years and had managed to 'infect' other colleagues, so they are now a group of three attending this year.

Workshops, tutorials, and symposia

DSM/DSL workshop
Since my recent research centers around using the Ruby on Rails platform to create a domain-specific language (DSL) for Web API mashups, I had to attend the domain-specific modeling/language workshop at ooPSLA. I did not have a chance to submit a workshop paper, so I attended as a regular participant. Overall, the workshop spanned three days, which I think is a bit much; I only attended the first day. The AM session papers were a mix of incomplete DSLs and approaches to creating applications using DSLs and DSM. After a nice lunch near Old Montreal, where I got to eat with some of the organizers, we reconvened for the PM sessions, which were deeper and also seemed more mature, with a variety of demos and more in-depth discussions.

Overall, I thought the majority of the papers dealt with visual DSLs, that is, environments that encourage or facilitate application construction with visual representations of the DSL. While I realize that there is a growing class of users (non-programmer types) who need to create applications and whom one can target with such visual DSLs, I am also convinced (based on experience) that visual languages are better suited to very narrow problems and do not scale well. My experiences using VisualAge C++ (circa 1996) and then VisualAge Java (circa 1997) left a bitter taste in my mouth when it comes to visual programming environments. I was able to build a complex application that we shipped to our customers; however, because the application was non-trivial, only a handful of developers could modify and debug it. My conclusion at the time was that visual code can be harder to understand and maintain than textual code. I did not see anything in the presentations at the workshop that convinced me a significant breakthrough has been achieved. I would agree that for some subset of problems and users a visual environment is very productive and attractive; however, I think it's best to layer the visual tools on top of a textual DSL or a platform supporting a textual language, e.g., Ruby, Rails, Python, Java, or C#.

LINQ tutorial

The one tutorial I attended this year was by Erik Meijer of Microsoft on LINQ. First off, I only attended part of the tutorial: it was scheduled as an all-day affair on the first day of the conference and I was at the DSM/DSL session in the AM. I had met Erik the night before at the hotel bar, and we had various interesting conversations, along with other folks (e.g., Martin Fowler), on many topics. I knew Erik was giving the tutorial and told him I would come for at least part of it. The one thing I must mention right from the start is that Erik is a truly cool dude. I can rarely say this of Microsoft colleagues I have met, but Erik is just cool. He's a real asset to MS, not just for his many contributions, but also for an overall attitude that debunks the MS engineer stereotype, i.e., the "typically arrogant, I know most everything and the world runs on the MS platform" type.
(Photo credit: self with iPhone, of Erik Meijer of Microsoft at the Hyatt Montreal hotel lobby)

Microsoft's Language INtegrated Query (LINQ) has been in research and development for a while now. In the recent releases of Visual Studio 2008 and the C# 3.0 language, LINQ is now a prominent feature. In a nutshell, LINQ adds new syntax, semantics, and libraries to the C# language (and really the .NET framework) to support direct querying and manipulation of relational data. In other words, using LINQ in C# and other .NET-enabled languages, you can run simple to very complex queries over relational data or other data structures directly in your code. For instance, here is some LINQ C# code for a few simple queries:

//Assume customers is a C# (or .NET) collection of Customer objects,
//either populated in memory or loaded from a relational DB.
//Note: a LINQ query yields an IEnumerable<Customer>, so use var
//(or append .ToArray() if you really want an array).
var female_customers = from c in customers
                       where c.sex.ToLower() == "female"
                          || c.sex.ToLower() == "f"
                       orderby c.age
                       select c;

//Another example
string[] cities = {"San Jose, CA", "New York, NY",
                   "San Francisco, CA", "Miami, FL",
                   "Raleigh, NC", "Portland, OR",
                   "Seattle, WA"};
var cities_in_california = from city in cities
                           where city.EndsWith("CA")
                           select city;

(you should easily be able to infer the SQL equivalent, though the EndsWith() and ToLower() selection operators may be slightly tricky...)
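For readers more at home in Ruby, roughly the same in-memory queries can be sketched with Enumerable. This is only an illustrative comparison under stated assumptions: the Customer struct and sample data are made up for the example, and unlike LINQ, nothing here gets translated to SQL.

```ruby
# Hypothetical Customer records; Enumerable#select/sort_by play the
# role of LINQ's where/orderby for in-memory collections.
Customer = Struct.new(:name, :sex, :age)

customers = [
  Customer.new('Ada',  'Female', 36),
  Customer.new('Bob',  'M',      41),
  Customer.new('Cora', 'F',      29)
]

# where sex is 'female' or 'f' (case-insensitive), orderby age
female_customers = customers.
  select  { |c| ['female', 'f'].include?(c.sex.downcase) }.
  sort_by { |c| c.age }
# => Cora (29), then Ada (36)

cities = ['San Jose, CA', 'New York, NY', 'San Francisco, CA',
          'Miami, FL', 'Raleigh, NC', 'Portland, OR', 'Seattle, WA']
# where city ends with "CA"
cities_in_california = cities.select { |city| city =~ /CA\z/ }
# => ["San Jose, CA", "San Francisco, CA"]
```

The point of the comparison is that both languages let you express the query over plain objects; LINQ's distinguishing trick is that the same syntax can also compile down to SQL.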

Software Engineering Radio has an episode (72: Erik Meijer on LINQ) where he explains the basics. MSDN hosts a dated but still interesting video of Anders Hejlsberg discussing LINQ and C#.

However, note that unlike with various DB drivers and language wrappers (e.g., JDBC) or even ORMs (e.g., Rails' ActiveRecord), you never deal with SQL, and all results and intermediary variables are C# objects and classes. MS added various new features to C# and the .NET framework to make that happen. From my limited understanding of LINQ (based solely on taking half of Erik's class), it appears that under the covers the LINQ queries are translated into calls to the .NET LINQ libraries, which are heavily templated C# classes representing various generic versions of queries. I am guessing that the same magic occurs for regular data structures when you use them in LINQ queries. I am also guessing that, under the covers, the LINQ .NET engine does the appropriate "magic" and generates optimized SQL for the database. Naturally, these optimizations will depend on which database you have, not simply on which DB driver you installed. Therefore, I would venture to guess that on MS SQL Server this works fine and dandy, but your mileage will likely vary if you use other DBs, e.g., MySQL, DB2, or Oracle. Since I am not 100% sure about that last statement, please don't quote me on it.

Generally, I am a firm believer in having a uniform representation of all parts of a software system, that is, in reducing the impedance mismatch that occurs as one integrates different parts of a system. For instance, in typical Web application development, the application logic is in some language and framework (e.g., Java or C#) but the user interface is coded in HTML, CSS, and JavaScript. The same goes for the data when it lives in a relational database. The various frameworks have facilities to reduce the impedance mismatch; however, for databases, even the best frameworks (e.g., Ruby on Rails' ActiveRecord) show their weaknesses when you need to do any moderately complex query (e.g., one that involves a couple of joins). What typically happens is that the database binding layer simply provides a pass-through that allows SQL to be submitted directly. The promise and beauty of LINQ is that the impedance mismatch is almost completely removed, since you only deal with the language in question, say C#, and can represent all data elements and queries thereof directly in the language.

This is all good stuff. However, enter some dose of skepticism. The first problem I already hinted at: with MS .NET and LINQ, I wonder how much you become tied to MS SQL Server, since for any decently complex DB application you'll start needing the query optimizer, and if that is not ported to work with the LINQ engine, then forget anything but a complete MS solution.

The second issue is one I discussed with Erik, and he has not convinced me otherwise. It is simply that there is potential for abuse by developers using LINQ. What I mean is that the relational database community has spent many years exploring and addressing the limits of the relational model, such that it is now well understood and can scale really well. These advancements have in great part enabled the Web commerce boom we have seen in the past 10 to 15 years. Show me one decently large commerce or Web application and I will point you to an example deployment of a relational database. By bypassing SQL with a language that is not a proper subset of it, or one not equivalently researched, you may lose the aforementioned benefits.

Finally, as far as I know, LINQ is not a standard or submitted to a standards body (though I know there is an effort to create a Java version named jLINQ), and I worry about how long it will remain a Microsoft-only technology. I realize that MS invented it, but that does not mean it should not become a proposed standard, which would allow it to be added to various languages and, importantly, allow various DB vendors to fully support it in various settings... I am likely dreaming a bit here; however, imagine if IBM and others had not created the SQL standards. What would have become of the database market? What about all the nice advances mentioned earlier?

Dynamic Language Symposium


I attended part of the DLS this year, as I had at OOPSLA 2005. It's an interesting bunch, since many of the originators of early OO languages are in attendance, e.g., Dave Ungar (visiting researcher at IBM Research), Mark Miller (Google), Jim Hugunin (Microsoft), and many others.

The talks I attended varied from very practical to esoteric. For instance, Jim Hugunin gave a very interesting and practical talk on the new .NET dynamic language runtime (DLR), which allows .NET to host (relatively easily) various dynamic languages. Jim was the one who gave us JPython (now Jython), so it was natural for him to implement Python on the DLR and share some of the results. The cool thing is that Python works and performs well on .NET---without a thorough study, he mentioned it was about twice as fast as Jython on one Java VM. He also mentioned efforts to move JavaScript and Ruby to the DLR as well. Interestingly, as with any port, subtle issues come up, especially how to integrate the .NET library and support it in the language. I am not sure whether MS plans to share the DLR spec or ideas with the rest of the world; however, this may be a glimpse into the future... a portable, solid, and performant virtual machine that can efficiently host various dynamic languages. (The JRuby folks are thinking along those lines as well; see Headius's blog on the topic.) Kudos to Microsoft for advancing this agenda, though at the same time it worries me, since I doubt this will ever fully work on Linux or the Mac. One esoteric talk was on RPython, which essentially allows Python code to be typed... weird, though I can see the statically typed heads inflating and smiling.
(Photo credit: self with iPhone, of Dave Ungar (creator of Self) of IBM Research and Mark Miller of Google asking questions during keynotes)

Sunday, November 18, 2007

Why I love Ruby? Reason 1: because strings change and when manipulated should stay strings and be readable


Why strings are critical
Many modern languages come with built-in string libraries or, better, with strings as a first-class concept. Arguably, languages that have not done so have suffered significantly from that shortcoming; case in point, C++, which in the early 1990s saw a variety of string libraries from vendors fighting for dominance while the language itself became less relevant and was surpassed by better alternatives.

The issue may appear unimportant on the surface; in practice, however, having excellent string support in a language is hugely important. Most tasks involve some type of string manipulation or usage, e.g., file input and output, strings as keys to hashes, and user interfaces, just to name a few. This implies that a language that facilitates string manipulation yields a real productivity boost for programmers. In the case of Ruby, as I hope to convince you, the string support is so comprehensive and advanced that common string tasks become a breeze, with the nice side effect of keeping the code rather readable.
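As a tiny illustration of that everyday string work (the sample sentence is made up for the example), here is a word count in a few lines of Ruby, using strings as hash keys:

```ruby
# Count word occurrences: a hash with strings as keys and a default
# value of 0, fed by split-based tokenizing.
counts = Hash.new(0)
"the quick fox and the lazy dog and the cat".split.each { |word| counts[word] += 1 }
counts['the'] # => 3
counts['and'] # => 2
```

Three lines, and still readable; that is the kind of productivity boost I mean.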

Basics
The Ruby designers chose to make strings a first-class type, and the language includes a comprehensive library supporting them. Unlike in some other languages, strings in Ruby are mutable. This means that strings, when manipulated, can change in place rather than always producing unique copies---Ruby has the separate basic type of symbols for immutable, identity-like uses. This decision has interesting consequences, but first let's look at basic string usage in Ruby. The following code snippets illustrate:

s = "this is a string"       # creates a string (double or single quotes both work)
s += ', and this is another' # += concatenates, producing a new string

The built-in String class contains tons of methods and also mixes in various modules (more on that in a future WILR Reason) that provide enumeration, comparison, and partitioning. Some basic methods (with s reset to "this is a string"):

s = "this is a string"
s.capitalize # => "This is a string" as a new string
s.capitalize! # same result but modifies the receiver, so s is now capitalized

s.upcase # => "THIS IS A STRING" (a new string)
s.downcase # => "this is a string" (a new string)

s.split # => ['This', 'is', 'a', 'string'] as an array of the words
s.split('is a') # => ['This ', ' string']

s.gsub! 'is ', '' # => "Tha string"

NOTE: s is modified in place; the Ruby idiom is to suffix ! on methods with side effects

Advanced manipulations

s = "This is a string"
s.include? 'is' # => true
s.include? 'Is' # => false

s.insert 9, 'nother' # => "This is another string" (modifies s)
s.replace 'string' # => "string" (replaces the contents of s)
s.center 20, '_' # => "_______string_______" (a new, padded string; s is unchanged)
s.center(20, '_').squeeze # => "_string_" (runs of the same character collapsed)
s.crypt 'password' # => "paHjoO.AYUKRQ" (one-way cryptography; exact output depends on the platform's crypt(3))

There are also various ways to match with regular expressions, e.g., String#match, String#scan, and the =~ operator. I'll discuss them in a future WILR Reason since regex support is one of the reasons :-)
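To give a quick taste ahead of that future post, here is a minimal sketch of String#scan and the =~ match operator:

```ruby
s = 'This is a string'
s.scan(/\w+/)  # => ["This", "is", "a", "string"] (every match of the pattern)
s =~ /a (\w+)/ # => 8, the index where the match begins (nil if no match)
$1             # => "string", the first captured group from the last match
```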

Features
Perhaps the most interesting and powerful aspects of Ruby strings are their support for inline expressions (interpolation) and the ease with which multiline strings can be defined and created.

a_string = 'cool'
"this is a #{a_string} string".capitalize! # => "This is a cool string"

x, y = 22.0, 7.0
"#{x}/#{y} = #{x/y} is an overestimate approximation for Pi which is closer to #{Math::PI}" # => "22.0/7.0 = 3.14285714285714 is an overestimate approximation for Pi which is closer to 3.14159265358979"

Creating multiline strings is as simple as:

s1 = %{Ruby on Rails
A clean, agile, and powerful Web framework
Created by David Heinemeier Hansson at 37signals}

s2 = <<END_STRING
Ruby on Rails
A clean, agile, and powerful Web framework
Created by David Heinemeier Hansson at 37signals
END_STRING

s1 == s2 # => false since s1 does not include the last newline
s1 += "\n"
s1 == s2 # => true

Final thoughts
As you can see, Ruby's string support is comprehensive and very easy to learn. The String class also comes with enumeration and comparison features built in. These allow strings to be used like other enumerable data types (e.g., arrays) and to be compared for sorting and other comparison-based algorithms.
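For instance, comparison and enumeration work just as you would expect (note: in current Ruby versions enumeration goes through explicit iterators such as each_char and each_line, rather than String mixing in Enumerable directly):

```ruby
%w[pear apple banana].sort # => ["apple", "banana", "pear"], via Comparable
'apple' <=> 'banana'       # => -1 (the spaceship operator behind sorting)
'abc'.each_char.to_a       # => ["a", "b", "c"] (enumerate characters)
"one\ntwo".each_line.to_a  # => ["one\n", "two"] (enumerate lines)
```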

The fact that strings are mutable means the Ruby virtual machine can allocate String instances efficiently. This is in sharp contrast to, for instance, the Java programming language, which mandates immutability of strings and where multiple copies of strings with the same value can eat at the heap space.

Finally, the fact that strings can embed expressions and that multiline strings are so easy to create has an interesting side effect for metaprogramming: with some careful usage, metaprograms can be as readable as the code they represent. More on this in a later WILR Reason.
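As a small sketch of that idea (the simple_attr macro below is made up for illustration), an interpolated heredoc fed to class_eval produces generated code that reads like hand-written Ruby:

```ruby
# A toy attr_accessor-like macro; the heredoc body is ordinary Ruby
# source with #{} interpolation filling in the attribute name.
class Class
  def simple_attr(name)
    class_eval <<-RUBY
      def #{name}
        @#{name}
      end

      def #{name}=(value)
        @#{name} = value
      end
    RUBY
  end
end

class Point
  simple_attr :x
end

pt = Point.new
pt.x = 42
pt.x # => 42
```

The heredoc keeps the generated method definitions looking exactly like the code you would have typed by hand, which is the readability payoff I am alluding to.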

Sunday, November 11, 2007

ooPSLA 2007 - part 3


ooPSLA 2007 in Montreal, Canada - Oct. 21 to Oct. 25, 2007
(this is a continuation from my ooPSLA 2007 review part 1 and part 2)



Noteworthy talks
Aside from the keynotes, I attended a few other noteworthy talks. In particular I want to mention three.

Martin Rinard of MIT presented an Onward! paper entitled Living in the Comfort Zone. This is a continuation of Martin's work on what he calls failure-oblivious computing, or Acceptability-Oriented Computing. In a nutshell, Martin is fundamentally questioning our notion of usable software. My interpretation of his main argument follows. Since all software will have bugs, he asks whether we can approach software construction with the explicit assumption that these bugs do and always will exist, and whether we can simply make systems more resistant to inputs that cause problems. In recent studies of various small but heavily used systems, he experimented and collected data that seem to corroborate his hypothesis that there is a comfort zone to which most inputs to a system can be constrained, yielding working software that is resilient to bad inputs. He experimented with both the pine UNIX mail reader and the commercial AbiWord word processor.

The second talk was by a truly interesting fellow named Brian Foote, who over the years has acquired a reputation for asking tough questions at ooPSLA keynotes and talks. Brian's presentation was very unconventional, and he was introduced as the 'conscience of ooPSLA'. Having had wine and a chat with Brian two days earlier, I would personally have to agree with that introduction.

Brian's talk was based on a paper he wrote with Joe Yoder entitled BIG BALL of MUD. He introduced his talk, if I recall correctly, as "An introduction to Post-Modern Programming." It certainly was an out-of-the-ordinary talk by a rather unique speaker: a collage of what appears to be a random survey of important thoughts, ideas, people, and failures in modern software engineering. However, there lies what I think is a coherent thesis in all of this blurry and witty exposé: software, perhaps due to its human roots, seems to have a certain inertia toward complexity and convolution. Anyone who has worked on truly successful and decently BIG software systems can attest to this tendency. Things generally seem to get worse, not better. The code becomes a big mess, and market and customer pressures always seem to prevent one from ever reengineering it for the better. Some techniques, such as Test-Driven Development (TDD) and refactoring, do help, but they are not perfect; they can require considerable effort and can aggravate things in other ways, thus just shifting the complexity elsewhere. Maybe Brian and Martin should meet and marry their ideas. It seems Brian has identified the problem and Martin is pointing to one, albeit resigned, solution.

The final talk I want to mention is Erich Gamma's (IBM Rational) talk on Jazz. This is the latest initiative from the people and the company that brought you Eclipse---the Rational division of IBM. Jazz is sort of an expanded version of the Eclipse platform containing all sorts of jazzy add-ons for team collaboration. For instance, Erich showed how a distributed team of developers for the Jazz platform itself is set up and how he could join one of the sub-projects. After joining, Erich could list the current TODOs, features, team members, and so on. All bug reports can be assigned to any member of the team, and members can also collaborate through instant chats, shared whiteboards, and other collaboration tools. Everything is transparent to members of the teams. All events and actions are recorded as RSS/Atom feeds that team members can consume in the comfort of their favorite feed reader. All in all, Jazz promises to do for distributed software development what Eclipse did for individuals... While the demo had a few glitches and the room was crowded to the max, with people sitting on the floor, I left feeling good about being an IBMer and hopeful that Jazz will let distributed software teams play together like those bands I hope to see again in New Orleans.

There are still some questions about Jazz's availability and license. I will stay out of that debate for two reasons. First, I personally don't think that everything the IBM Eclipse team does should automatically be given away for free; we are a for-profit company, and in this case the OSS business model seems nonexistent. Second, I am not part of the Jazz team, nor do I know the details behind the project or its motivations, goals, and customers.

Finally
A lot of people raved about the 50 in 50 talk by Dick Gabriel (IBM Research) and Guy Steele Jr. (Sun Microsystems), but I did not see it. I was visiting some old high-school buddies who live in Montréal and whom I had not seen for more than 15 years. So in many ways I had my own time to reminisce, and I was happy to get secondhand accounts of Dick's and Guy's exploits down memory lane.

In what will surely be my final ooPSLA 2007 post (truly this time), I will complete the braindump with a recall of the LINQ tutorial, the DSM/DSL workshop, and the DLS symposium. I'll briefly mention some highlights from the different panels I attended and give brief caricatures of interesting colleagues, friends, and folks I met at ooPSLA this year. Easily my favorite ooPSLA of all the years.

Change history (marked with strikes and emphasis)
11/25/2007: minor English updates and additions

Wednesday, November 7, 2007

ooPSLA 2007 - part 2


ooPSLA 2007 in Montreal, Canada - Oct. 21 to Oct. 25, 2007
(this is a continuation from my ooPSLA 2007 review part 1)



Embarrassment of the riches

Photo of Fred Brooks and Bertrand Meyer (inventor of the Eiffel language)

The first thing one notices from this year's ooPSLA program is that it is chock-full of BIG computer science names, such as Turing Award winners John McCarthy of Stanford and Fred P. Brooks of the University of North Carolina at Chapel Hill (formerly of IBM), as well as David Parnas of the University of Limerick, Ireland, Pattie Maes of MIT, and Gregor Kiczales of the University of British Columbia (formerly of PARC). With so many famous keynoters, to fit the program into three days some of the research paper sessions were scheduled concurrently, which created a vacuum at those sessions... Not a huge problem for a well-attended conference, but it made it difficult for me to attend 'non-famous' sessions.

I managed to see most of the keynotes and only missed Maes's talk---it conflicted with yet another invited talk, by Brian Foote of Industrial Logic, Inc. (now consulting at Google and other places). More on Brian's talk in a future post.

Keynotes

Brooks's talk was about collaboration and, briefly, about why the Mythical Man-Month thesis still remains mostly valid more than 25 years later. I got three main takeaways from his talk: 1) software engineering is hard and remains an intricately human-intensive process; 2) most important human works typically have one primary designer/architect. Brooks gave examples from many fields, e.g., Michelangelo's Sistine Chapel, Frank Gehry's Guggenheim museum in Bilbao, Spain (see photo, credit Wikipedia.org), and many others; the one exception to the solo structure is the pair, e.g., he cited pair programming as a successful agile practice; and 3) Brooks thinks that object-oriented technologies are the closest we have come, so far, to the elusive "silver bullet".



McCarthy's keynote centered on a new programming language called Elephant 2000 that he has been working on for a while. It mostly reminded me, and others, of efforts from the multiagent community on speech-act and agent-based programming. The best part of the talk, in my opinion, was a series of intro slides on LISP's history that a member of the audience asked him to go over. It's always cool to reminisce about the events that led to the creation of aspects of computer science. For instance, McCarthy mentioned how DARPA provided the first funding in the '60s to create CS at MIT as a gift for the war effort and how, as it did with all such grants, MIT divided the money among the major departments. The Math department got this influx of money and needed to spend it. Minsky and McCarthy were lucky to get some of it for a crazy idea they called "artificial intelligence," along with a bunch of math graduate students...


Parnas's talk was mainly about documenting software systems and the difficulties that typically arise. As a current illustration of the problem (which may have ended recently), Parnas gave the long-running battle between the EU and Microsoft over allowing competitors to interoperate with MS servers by providing enough documentation on Microsoft's protocols. Apparently the first effort was on the order of 10K pages and was full of inconsistencies and contradictions... Parnas advocates more mathematical verbiage and precision to help with this general issue.

Opinion
Overall, it was nice to see and (in some cases) meet these famous computer scientists. And their general focus on basic research and fundamental principles (gathered empirically or mathematically) is an important reminder to us all. I personally got a chance to speak to both Brooks and Parnas. So kudos to the ooPSLA organizers for inviting them.

However, I cannot help but also point out that both Brooks's and Parnas's talks focused on topics that seem a bit at odds with the mostly agile ooPSLA crowd. Brooks was intensely advocating having an architect and a level of 'blueprint' for the system. This could be seen as at odds with the current world, where time-to-market pressures and constant changes in the marketplace make that approach (taken to its fullest) mostly unrealistic; hence the general idea in agile teams of not doing big design up front (aka BDUF).

Parnas's documentation rhetoric, while clearly needed for a class of software systems, is likely unrealistic in most real-world settings, where there is a shortage of programmers (and well-educated engineers) and where having working, malleable code will get your contract signed, or get you ahead of competitors, faster than one could finish reading the huge amount of documentation needed to precisely describe even the simplest of systems. Case in point: I was part of the JavaPOS standard effort, and it took a distributed team of more than 20 engineers (from more than 10 companies) over 5 years to create a specification (at 3K+ pages) for typical POS devices (i.e., scanners, magnetic stripe readers, receipt printers, and so on) that allows application providers to easily get their software working with different hardware vendors by simply changing an XML configuration file.

Next
In the next (and last) post on the ooPSLA 2007 conference, I will continue the braindump of the interesting sessions I attended. This will hopefully include Brian Foote's talk on BIG BALL OF MUD; the workshop on domain-specific modeling/languages; some of the Onward! paper sessions; the LINQ tutorial by Erik Meijer of Microsoft; the Eclipse Jazz presentation by Erich Gamma of IBM Rational; a few of the panels I attended on software engineering, languages, and SOA; and some parting thoughts from mingling with many interesting minds and colleagues in the field.

Monday, November 5, 2007

Why study computer science?

Recently, an old high-school friend back in Port-au-Prince, who contacted me after visiting my Web sites, asked me the following question.

> Why did you study computer science (I am just curious)? I am clueless about technology, but it does fascinate me.

Since I thought this was an interesting question, I told her I would blog my answer. So here it is, with some minor updates to elaborate some points.
Since I thought this was an interesting question, I told her I would blog my answer. So here it is, with some minor updates to elaborate some points.
It's hard to briefly answer why I chose this field of study (computer science) and why I have dedicated much of my life and energy to helping advance it --- albeit with small success so far... However, if I were pressed for a concise and quick answer, I would simply say:

I chose computer science as my primary field of study because the computer (or computing machine) is maybe the most flexible, complex, useful, and potentially dangerous man-made invention. While the field (and its associated sub-fields) has created many pragmatic and useful applications in its 60 (or so) years of existence, computing has also opened many unanswered scientific questions, ranging from the purely theoretical (e.g., does P = NP?, and the limits of computation and artificial intelligence) to the social (e.g., the impact of the Internet on freedom of speech and governments, gaps between the haves and have-nots, and so on) to the deeply personal and human (e.g., entertainment, education, privacy, and self-expression), to list a few.

As a more elaborate example, the Internet and the Web (which arguably spawned from the sub-fields of networking and software and hardware engineering) have forever changed all aspects of the fabric of human civilization, and while mostly beneficial, the resulting changes are not always positive---the Web can be as great a tool for exchanging family pictures as it can be for propaganda.

Being in Silicon Valley at what is really the beginning and forefront of this revolution is truly exciting and invigorating. Being part of the research community, tasked with helping to advance the field, is an honor and a privilege.

Thursday, November 1, 2007

ooPSLA 2007 - part 1

ooPSLA 2007 in Montreal, Canada - Oct. 21 to Oct. 25, 2007

The 20+ year old conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA) is easily one of the top computer science conferences in the field. This is where key revolutionary, evolutionary, and influential ideas and technologies such as Java, Agile, Aspects, Patterns, UML, Eclipse, and DSLs (just to name a few) were mostly first discussed in an open academic and industrial setting. The inventors of these ideas and technologies make ooPSLA a GOTO date on their calendars and typically give tutorials and take part in panels with colleagues on the subject.

Photo of Dick Gabriel and David Parnas during Parnas's keynote

That long history and the impact of the technologies and ideas the conference has spawned make it a star conference to attend for anyone in the field. The fact that the acceptance rate is usually extremely low (10% or so) also gives more value to any paper accepted there. However, unlike what the name suggests, ooPSLA is more than a conference on object-oriented technologies, which is why Dick Gabriel (IBM Research distinguished engineer and ooPSLA 2007 conference general chair) and other organizers have started to write the OO part in lower case, thereby correctly de-emphasizing the importance of OO to the conference as a whole.

ooPSLA and me
In late 1997, during my first year of graduate school, I took Ed Gehringer's class on OO programming at NC State University, and Ed, as a long-time OOPSLA participant, introduced us all to OOPSLA and the various technologies being discussed there at the time: Java, Smalltalk, Eiffel, Patterns, UML, and so on. I attended my first OOPSLA in 1999 (Denver, CO) and have attended sporadically ever since (2000 in Minneapolis, 2001 in Tampa Bay, 2002 in Seattle, and 2005 in San Diego). This year, after a one-year absence, I am back at ooPSLA, and this time I also have a short paper and poster on my current research.

In a nutshell, I created a domain-specific language (DSL) in the Ruby language for Web API mashups. Taking advantage of the Rails framework, the DSL allows a high-level representation of the various parts of what it takes to create a mashup. Using some Ruby metaprogramming magic, the DSL constructs are converted into a full RoR Web application with the necessary plumbing to connect to the Web APIs (i.e., REST, RSS, Atom, and APP) and to present views for user interactions as well as the back-end logic for service interactions. Ajith Ranabahu and I built the first implementation of the platform under the code name Swashup (Situational Web Applications Mashups). At ICSOC 2007, Hernan Wilkinson, Stefan Tai, Nirmit Desai, and I published a paper focusing on the DSL itself.

In a subsequent post, I'll outline some notes and highlights I have captured during this year's conference.