9/12/2016

Update on the Software Languages Book

Deleting this post as it is superseded by newer updates.

8/25/2016

Scoped global variables in Prolog

It just so happens that I needed global variables in some Prolog code.

In fact, I needed more carefully scoped global variables. SWI-Prolog's global variables are truly global. (Well, they are thread-scoped, for what it's worth.) This is not good if you need many global variables, possibly in different parts of an application.

An uninspiring approach would be to fabricate global variable names so that they are scoped internally by some name prefix. It was more fun to achieve scoping by actually using one truly global variable to provide many scoped variables. Here is a demo:

?- logvars:get(myscope, a, X).                            
true.

?- logvars:get(myscope, a, Y).
true.

?- logvars:get(myscope, a, X), logvars:get(myscope, a, Y).
X = Y .

?- logvars:get(myscope, a, X), logvars:get(myscope, b, Y).
true .

Here is the code:

https://github.com/softlang/yas/blob/master/lib/Prolog/logvars.pro

Inlined below:

% (C) 2016 Ralf Laemmel
:- module(logvars, []).

/*
get(+S, +N, -V): Retrieve the value V of the global variable named N
and "scoped" by the global variable S. The variable N is
"automatically" created with a fresh logical variable V as initial
value.
*/

get(S, N, V) :-
    atom(S),
    atom(N),
    (   nb_current(S, _)
    ->  true
    ;   nb_setval(S, _)
    ),
    nb_getval(S, (Ns, Vs)),
    varnth0(Pos, Ns, N),
    nth0(Pos, Vs, V).

/*
varnth0(-Pos, ?Ns, +N): given a list Ns of atoms with a variable tail,
find the name N in Ns or, if not present, unify the first variable
position of Ns with N, and return the position as Pos. This is a
helper for get/3.
*/

varnth0(Pos1, [N1|Ns], N2) :-
    atom(N1),
    atom(N2),
    (   N1 == N2
    ->  Pos1 = 0
    ;   varnth0(Pos2, Ns, N2),
        Pos1 is Pos2 + 1
    ).
varnth0(0, [N1|_], N2) :-
    var(N1),
    atom(N2),
    N1 = N2.
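
Note that there is no separate "set" operation: the stored values are plain logical variables, so a scoped variable is assigned simply by unifying its value, and later lookups within the same query see the binding. (The binding is undone on backtracking, which is why the separate queries in the demo above each see a fresh variable.) A demo query in the same style, assuming the module is loaded as above:

```prolog
?- logvars:get(myscope, a, X), X = 42, logvars:get(myscope, a, Y).
X = Y, Y = 42.
```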

8/17/2016

Megamodels of Coupled Transformations

While having fun at UvA in Amsterdam teaching Haskell in a pre-master summer school, I also managed to walk across the street to see folks at CWI. I will be giving a presentation on coupled transformations and megamodeling.

Title: Megamodels of Coupled Transformations

Abstract: Many software engineering contexts involve a collection of coupled artifacts, i.e., changing one artifact may challenge consistency between artifacts of the collection. A coupled software transformation (CX) is meant to transform one or more artifacts of such a collection while preserving consistency. There are many forms of coupling, depending on technological space, application domain, and solution approach. We axiomatize and illustrate important forms of coupling within the Prolog-based software language repository YAS (Yet Another Software Language Repository), relying on the higher-level, predicate-logic-based megamodeling language LAL for axiomatization and the lower-level megamodeling language Ueber for build management and testing.


Date and Time: 19 August, 11:00am



8/03/2016

An updated update on the software language book

Deleting this post as it is superseded by newer updates.

6/12/2016

Status update on the Software Language Book

Deleting this post as it is superseded by newer updates.

6/05/2016

Responding to reviews of rejected conference papers

This post is concerned with this overall question:

How to make good use of reviews for a rejected conference paper?

The obvious answer is presumably something like this:

Extract TODOs from the reviews. Do your work. Resubmit.

In this post, I'd like to advocate an additional element:

Write a commentary on the reviews.

Why would you respond to reviews of a rejected conference paper?

Here are the reasons I can think of:
  • R1: You received a review that is clearly weak and you want to complain publicly. I recommend against this complaint model. It is unfriendly toward the conference, the chairs, and the reviewers. If one really needs to complain, one should do so in a friendly manner through direct communication with the conference chair.
  • R2: You spot factual errors in an otherwise serious review and you want to defend yourself publicly. There is one good reason for doing this: just getting it off your chest. There are two good reasons for not doing it. Firstly, chances are that your defense is perceived as an unfriendly complaint; see above. Secondly, why bother and who cares? For instance, sending your defense to the chairs would be weird and useless, I guess.
  • R3: You want to make good use of the reviews during revision and document this properly.

R3 makes sense to me. 

R3 is what this post is about.

We respond to reviews anyway when working on revisions of journal submissions because we have to. One does not make it through a major revision request for a journal paper unless one really makes an effort to properly address the reviewer requests.

Some conferences run a rebuttal model, but this is quite different. Rebuttal is about making reviewers understand the paper; revision of a journal paper is more about making enough presentational improvements, or doing bits of extra thinking and even research, so that the revision is ultimately accepted.

In the case of a rejected conference paper and its revision, I suggest writing the commentary as if the original reviewers were to decide on the revision, even though this will not happen, of course. It remains to be decided on a case-by-case basis whether, how, and when the commentary should be made available to whom and for what purpose.

Not that I want my submissions to be rejected, but it happens because of competition and real issues in a paper or the underlying research. My ICMT 2016 submission was received in a friendly enough manner, but rightly rejected. The paper is now revised, and the paper's website features the ICMT 2016 notification and my commentary on the reviews. In this particular case, I estimated that public access to the notification and my commentary would do more good than harm. At the very least, I can provide a showcase for what I am talking about in this blog post.

With the commentary approach, there are some problems that I can think of:
  • P1: Reviewers or conference chairs feel offended. Without being too paranoid, the reviewers or the chairs could receive the commentary as criticism of their work. For instance, a chair may think that some review was not strong enough to be publicly exposed as a data point of the conference. I have two answers. Firstly, an author should make an effort to avoid explicit or subliminal criticism. (We know how to do this from how we deal with journal reviews.) Secondly, dear reviewers and chairs, maybe the world would be a better place if more of the review process were transparent?
  • P2: Prior reviews and your commentary could be misused by reviewers. There is good reason for not exposing reviewers to other reviews of the same paper (or a prior version thereof), at least not until they have cast their vote, because they may get biased or they may use these other views without performing a thorough analysis of their own. This is a valid concern. This problem may call for some rules as to i) which conferences are eligible for passing reviews and commentary to each other and ii) when and how commentary can be used by reviewers.
  • P3: Your commentary is perceived as putting pressure on reviewers of the revision. At this stage, I don't propose that reviewers be required in any way to consider the commentary on a previous version of a paper, because reviewing already takes too much time. All I am saying is that reviewers should be given the opportunity to access previous reviews and the author's commentary, at least at some stage of the review process. Reviewers are welcome to ignore the commentary. In fact, some reviewing models may be hard to reconcile with the notion of commentary. For instance, I don't know whether it would work for the double-blind model.

In summary, commentary on rejected conference submissions is a bit like unit testing. We should do it because it helps us test our interpretation of the reviews in a systematic manner. Without such testing, we are likely to i) complain non-constructively about the reviews; ii) ignore legitimate and critical issues pointed out by the reviews; and iii) as a consequence, resubmit suboptimal revisions and keep program committees busy. So we do not really write this commentary for future reviewers; rather, we write it for ourselves. However, we write it in a style such that it could be used for good by future reviewers.

Once the community gets a bit more used to this idea, we could deal with commentaries in much the same way as with optional appendices at some conferences. One risk is that of bias when reviewers are exposed to previous reviews and author claims in the commentary. Another risk is that a badly implemented process for commentaries would just cause more work for both program committees and authors. Maybe I am thinking a bit too revolutionarily here, but I am looking forward to a system where we break out of the static nature of program committees and allow review results and author responses to be passed on from conference to conference. I am thinking of a more social process of reviewing and revision.

Regards,
Ralf