Report on the JSPS Award for Eminent Scientists FY2014

Robert Kowalski, Professor Emeritus, Imperial College, London, UK

NII, October 13-21.

I prepared for the Logic and English lecture and workshop held at NII on 22 October. For this purpose, I updated the slides I used for my talks last year. I also commented extensively on the eight abstracts I received from PhD students before the workshop. I sent my comments on the abstracts to the students, so they could revise them before the workshop itself.

I also prepared for my other talks and visits. On 17 October, Satoh-sensei and I travelled to Waseda University, where we met Ueda-sensei and discussed the topics for my visit to Waseda University on 4-5 December. We also met Okuno-sensei, who is in charge of the Logic and English workshop to be held for students in the Graduate Program for Embodiment Informatics.

During this period, I also had meetings with Satoh-sensei and the researchers working on his Proleg Project, which formalises Japanese laws. In addition, I prepared a completely new set of slides for my other talks during this visit. The slides include a survey of many of the most important approaches to computing in the fields of programming, databases, and knowledge representation and problem solving in artificial intelligence. They identify common features and differences, and conclude by proposing a single logic-based framework that combines the most important features of the different approaches.

NII, October 22: Lecture and Workshop on Logic and English.

I gave a talk on Logic and English, followed by a three-and-a-half-hour workshop.

In the lecture, I emphasized two complementary issues involved in the logical use of natural language. The first issue is that many sentences of natural languages, such as English, have a hidden but simple logical form, connecting conclusions with conditions. One consequence of this logical form is that the sentences in a logically coherent body of text typically form a triangle, or pyramid. In this pyramid, the main goal or conclusion is at the top, and the different ways of solving the goal and their associated subgoals form the body of the triangle. Effective communication involves presenting the goal at the top of the triangle early in the communication, and making it clear how lower-level subgoals contribute to the solution of the goal.
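
To make the conclusion-conditions form concrete, here is a small illustration in logic programming notation (the sentence and the predicate names are my own invention, not taken from the lecture):

    % "A person is eligible for the award if the person is nominated
    %  and the person has an outstanding research record."
    eligible_for_award(Person) :-
        nominated(Person),
        outstanding_record(Person).

    % The conclusion (eligible_for_award) is the goal at the top of
    % the pyramid; the conditions in the body of the clause are the
    % subgoals that form the body of the triangle.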

The second issue is concerned with presenting information coherently. At the level of individual sentences, this can be achieved by presenting familiar information at the beginning of the sentence and new information at the end, with the sentence as a whole expressing the logical relationship between the old and the new information. In many cases, new information introduced at the end of one sentence becomes familiar information at the beginning of the next sentence.

These two issues need to be addressed together in formulating communications. In most cases, this can be done by formulating a succession of sentences in such a way that they present the goal first, and then successively reduce it to subgoals, until the subgoals themselves can be accepted as facts or as reasonable assumptions.

There were eight abstracts presented and discussed at the workshop. Most of these had been revised by the students after they received my comments. All the students gave a short description of their abstracts before the detailed discussion. This had the advantage of giving us a better understanding of the topic of each abstract before we studied and discussed it in detail.

Most abstracts had a similar structure, starting with a description of the problem and sometimes of previous research, followed by a description of the new work and its contribution to the solution of the problem. This is a good approach, conforming to the second of the two issues that I discussed in my talk earlier in the day. However, the first issue, concerning the presentation of the goal-subgoal structure of the work, proved to be more problematic in a number of cases, and most of the discussion revolved around how the treatment of this issue could be improved.

JAIST, October 27-31.

I travelled to Kanazawa with Satoh-sensei on Monday 27 October. We discussed his work on extracting rules from legal cases, which he presented later at the JURISIN workshop. On Tuesday 28 October, we travelled to JAIST. I gave a lecture, intended primarily for the students, but also attended by some of the academic staff.

The lecture presented an overview of common features arising in such different areas of computing as programming, databases, and knowledge representation and problem solving in artificial intelligence. I argued that the main common feature of these systems is their concern with representing and generating state transitions, and that they differ mainly in their notions of states and of the events that cause state transitions. I pointed out that several prominent researchers in different areas of computing have identified a distinction between two kinds of systems: declarative systems, which have a logical semantics, and reactive systems, which are typically represented by rules of the form if conditions then actions, but which do not have a logical semantics. I presented a proposal for combining the two kinds of systems, giving them both a logical semantics, and showing how examples arising in existing systems can be reformulated in the proposed framework.
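
The flavour of the proposal can be suggested by a schematic sketch in logic programming notation. This is my own rendering of the idea, not the concrete syntax of any implemented system; the predicates and the timestamp convention are invented for illustration:

    % Reactive rule (schematic), given a logical semantics by making
    % time explicit. Read: for all times T1, if the condition holds,
    % then for some later time T2 the action holds:
    %     if   orders(Customer, Item, T1)
    %     then ships(Item, Customer, T2), T1 < T2.

    % A declarative clause of an ordinary logic program, defining a
    % state predicate in terms of past events, again with explicit
    % time (\+ is negation as failure):
    owes(Customer, Price, T) :-
        orders(Customer, Item, T0),
        price(Item, Price),
        T0 =< T,
        \+ paid_between(Customer, Price, T0, T).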

Following the talk, I held a number of discussions with different researchers, exploring how the issues presented in my talk might apply to their own areas of research.

On Wednesday 29 October, we held a writers' workshop in which five students presented their abstracts. The students had sent me their abstracts several days earlier, and I had sent them my comments, which they took into consideration when presenting their abstracts at the workshop. In this workshop, unlike previous workshops, the students presented their research topics and went through their revised abstracts in front of the group. This was a very effective way of organising the workshop, which helped to involve the students more actively in the discussions. The workshop also benefited from the attendance of a large number of other students who were not presenting abstracts. They contributed to a lively discussion, and most of them stayed until the end of the workshop, which lasted a little more than three hours.

On Thursday 30 October, we moved to a “Logic Camp”, a gathering of academics and senior researchers from JAIST working in the areas of logic and software verification. The Camp was organised as a workshop in which researchers present their work, to foster closer collaboration. I gave a shortened version of my “Towards a Science of Computing” lecture, improved in the light of the comments I received following my Tuesday lecture. I also worked hard to follow the other talks and to identify relationships with my own work. I was especially pleased to discover that state transition systems played a major role in the work presented in many of the talks.

It was interesting and educational to learn that conditional rewriting systems, which transform states represented by terms, are the basis for the work on verifying software designs in Futatsugi-sensei's group. Ogata-sensei, in particular, presented an interesting application, and compared representations in two different formalisms for representing state transitions. The application is very close to the applications we intend for our own work, and I discussed the details of the representations with Ogata-sensei after the talk. He kindly sent me copies of his papers on the subject, which I plan to study in greater detail.

Hirokawa-sensei also gave a talk about state transition systems of a particularly simple, but very powerful form. His talk focussed on the convergence properties of such systems, which are also relevant to the topic of my talk.

Terauchi-sensei presented a survey of recent work on program verification, discussing the verification of a simple C program in detail. I was not surprised to see that state transitions play a major role in verification, but I was amazed to discover that representing state transitions by means of non-recursive Horn clauses is an important feature of this work. This fits in very well with the logic-based approach that I advocated in my own talk, and I discussed this with Terauchi-sensei in greater detail after his talk.
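
As a toy illustration of the general idea (my own example, not drawn from the talk), the state transitions of a small loop-free program can be encoded as non-recursive Horn clauses, with one predicate per program point:

    % Program:  x = input();  y = x + 1;  assert(y > x);
    % input_value/1 is assumed to be supplied by the environment.
    p0(X)    :- input_value(X).
    p1(X, Y) :- p0(X), Y is X + 1.

    % The assertion is violated only if error is derivable;
    % verification amounts to showing that it is not.
    error :- p1(X, Y), Y =< X.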

Tojo-sensei presented the work in his group on the formalisation of agent communication and change of belief over the course of time. He mentioned the use of autoepistemic logic to represent belief awareness, and argued for a possible worlds approach instead. I had heard an earlier version of this work last year, but I was able to understand the proposal much better with this presentation. I am now more sympathetic to the approach, and I am tempted to investigate it in greater detail in the future.

On Friday 31 October, I transferred to Kyoto. Monday 3 November was a National Holiday, and I took a holiday 4-7 November in compensation for my weekend attendance at the JURISIN workshop and Shonan Village meetings later in my visit.

Nara Women's University, November 10.

I presented a revised version of my JAIST talks to the NWU students. I sent a copy of my slides to Nide-sensei to distribute to the students, so they could familiarize themselves with the English terms and computing concepts before the talk. Nide-sensei studied my slides before the talk and made several suggestions to help make the talk more accessible to the students. I addressed his suggestions, and this greatly improved not only the talk, but also my understanding of the issues involved. After the talk, I received a number of very helpful comments. In particular, Nide-sensei suggested that it would be interesting to explore extending the framework that I proposed in my talk to the case of continuous change of state, as needed, for example, in robotics. Satoh-sensei suggested that I compare the framework with Hoare logic for computer programs.

NAIST, November 11-13.

I presented two talks on 11 November: the first, a further revision of my JAIST and NWU talks, proposing a new logical foundation for computing; the second, on Computational Logic and its Relationship with Guidelines for English Writing Style. The second talk not only presented the background for the Logic and English workshops held on 12 and 13 November, but also provided a further argument for the framework proposed in the first talk.

There were ten abstracts prepared for the workshops held on 12 and 13 November. I began the first workshop by giving a short presentation drawing attention to the main issues I expected to arise in the discussion of the abstracts: namely, the desirability of presenting ideas in a top-down manner, and of presenting old, possibly familiar information before introducing new information. In many cases, the connection between old information and new information is a logical one.

We discussed six of the abstracts on the first day. Each student first explained the topic of the abstract informally, and then we went through the abstract sentence by sentence. We confirmed that the two issues I focused on in my short presentation at the beginning of the workshop were relevant to all of the abstracts. In addition, however, we observed another problem that I had failed to emphasize earlier: namely, the value of referring to previously mentioned topics precisely, to avoid forcing the reader to search for the intended referent. On the other hand, we also observed that in some cases it can be unhelpful to give the reader too much detail. In such cases, it can be better to use a more abstract term in preference to an expression that is more concrete, but too detailed.

We discussed five abstracts on the second day. The discussion was more straightforward, and the issues were more clear cut than on the first day. I would like to think that, at least to some extent, this was because the students had a better idea of how to write more logically, after the previous day's workshop.

Kyoto University, 17-20 November.

I had several meetings with Yamamoto-sensei and Satoh-sensei on 17 and 18 November. We discussed a number of topics, including the forthcoming Shonan Village meeting and the problem of extracting knowledge in logical form from natural language texts.

On 17 November, I received abstracts for the Logic and English workshop on 20 November. I commented extensively on the abstracts I received, and I also wrote and sent the students a collection of general guidelines for writing style, based on the problems that I identified in their abstracts.

On 18 November, Satoh-sensei gave a lecture about his work on answering yes-no questions and giving explanations for answers to Japanese language questions, using his legal reasoning system Proleg. Afterwards, we discussed the relationship between syntactic analysis and semantics, needed for this task.

On 19 November, I participated in Yamamoto-sensei's seminar, in which three students presented their work in English. Two of the talks were given by students who also submitted abstracts of their work for the Logic and English workshop on 20 November. These two talks were a big help in understanding the abstracts that the students had sent me earlier.

It was interesting to observe that all three talks suffered from the same problem of presenting the details of the work, but neglecting to clarify the relationship between the top-level goals and subgoals of the work. This was also a problem with the abstracts, which I had read and commented upon earlier.

I led a workshop on Logic and English on the morning of 20 November. At the beginning of the workshop, I presented a general summary of the main logical principles needed for effective natural language communication. I had also prepared by editing and improving the students' abstracts in advance, so the discussion of the abstracts was much more thorough than it would have been otherwise. During the discussion, it became clear that there were a number of points that the writers had wanted to make in their abstracts, which I had not adequately understood. As a result, the workshop was a two-way discussion, during which the students and I learned from one another. I found this two-way discussion very enjoyable, and I think it was also good for the students to engage with the teacher in such a mutual learning experience.

In the afternoon of 20 November, I gave a further revision of my “Science of Computing” talk. Yamamoto-sensei asked me about the typical applications of the framework that I presented in my talk. I realised that I had not made it clear in my talk that the framework has both a theoretical purpose and a practical purpose. Its theoretical purpose is to provide a single, unifying framework for all areas of computing. Its practical purpose is to serve as the foundation of a single computer language that can be used for all applications. However, because different classes of applications have different features, it would probably be useful to develop specialised variants of the language, optimized for different application domains.

Yamamoto-sensei and I also discussed the topic of Logic and English, and he drew my attention to the book “Towards Survival English”, written in Japanese by Nishimura Hajime. We discussed and compared the principles that I focussed on in the workshop with the principles advocated in the book. Yamamoto-sensei gave me a spare copy of the book, and I studied the English examples in it on the train returning to Tokyo on 21 November.

JURISIN, November 23-24.

I attended and participated in the discussions of the workshop. Many of the talks were related to my research background and interests.

I appreciated the invited talk about ontology engineering by Mizoguchi-sensei. I was especially pleased to see that the practical applications presented in the second half of the talk all involved the reduction of goals to subgoals. I discussed this feature with Mizoguchi-sensei after his talk, and mentioned that such goal-reduction rules can be represented in logic programming form. One advantage of such a representation is that it can be evaluated to determine whether it is true. In some cases, it may be possible to show that a goal-reduction rule is missing conditions that are needed for it to be true.
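
For illustration (the example and the predicate names are my own, not from the talk), a goal-reduction rule written as a logic programming clause can be read both procedurally and as a logical claim whose truth can be questioned:

    % Goal reduction: to make a room comfortable, reduce the goal to
    % the subgoals of getting the temperature and the lighting right.
    comfortable(Room) :- temperature_ok(Room), lighting_ok(Room).

    % Read as a logical claim, the clause may be untrue as it stands:
    % it is arguably missing a condition such as quiet(Room), and
    % making it true requires adding the missing condition.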

There were many different approaches presented in the different talks. Some of them used some form of logic. Many of them used no logic at all. It was disappointing to see so little agreement about how to represent legal texts and legal relationships. I challenged some of the speakers to clarify whether their approach was chosen for technical or sociological reasons.

The invited talk by Bart Verheij presented an approach that combines and unifies Bayesian probability, argumentation and scenarios. However, it does so at the expense of losing the expressive power of logic. I suggested the use of abductive logic programming as an alternative, which has similar capabilities, but also includes the knowledge representation and problem solving capabilities of logic programming. The naturalness of a logic programming approach was confirmed by its implicit use in the work presented by Nguyen Le Minh from JAIST. His work could be interpreted as using annotations of natural language texts to distinguish between the conditions and conclusions of sentences, as is done in logic programming.

Yusuke Miyao from NII talked about using a set theoretic representation of the meaning of natural language texts. We discussed whether this representation could equally well be formulated either in description logic or in logic programming form. Similarly, Chitta Baral from Arizona State University described an approach to translating natural language text into a target logical language, using Montague's lambda calculus. I was interested in whether this approach might be suitable for the use of logic programming as the target language.

Japan-Korea Workshop on Law and Informatics, NII, November 25.

The workshop consisted of a number of presentations from Korean and Japanese scholars working in this field. The Korean presentations were not very technical, but mainly described existing systems. However, I was interested to see the emphasis given to Complex Event Processing (CEP) in the talk by Hanmin Jung. CEP is an important application for the LPS framework that I presented in my talks on “Towards a Science of Computing”.

The Japanese presentations gave more technical details, both about existing systems and about systems under development. Shozo Ota from the Law School of the University of Tokyo presented work combining Bayesian probability for finding “facts” with fuzzy logic for determining legal subsumption. I questioned the need for this combination, and asked about its relationship with existing approaches to knowledge representation and problem solving in artificial intelligence.

I was interested to see that the work presented by Akira Shimazu from JAIST, on translating natural language texts into logic, distinguishes between requisites and effectuation, in a manner that is similar to the distinction between conditions and conclusions in logic programming. It was not clear to me, however, whether this approach is able to incorporate exceptions, as in the Proleg system of Satoh-sensei.
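
The kind of exception I have in mind can be sketched in ordinary logic programming with negation as failure. This is a simplified rendering of the idea, not Proleg's actual notation:

    % Effectuation follows if the requisites hold, unless an
    % exception applies (\+ is negation as failure):
    contract_effective(C) :-
        offer(C),
        acceptance(C),
        \+ exception_to_effectiveness(C).

    % Exceptions are stated separately, as in legal drafting:
    exception_to_effectiveness(C) :- party_is_minor(C).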

Nitta-sensei and Tojo-sensei also presented their work. I was already familiar with this from earlier talks and discussions, but the talks helped me to refresh my understanding of their work.

Shonan Village Workshop: towards explanation production combining natural language processing and logical reasoning, 26-29 November.

The workshop was attended by a number of researchers who had also presented their work at the JURISIN and/or Japan-Korea workshops, as well as by a number of researchers from Europe, mostly working on natural language processing (NLP). Although most of the presentations focused either on NLP or on logical reasoning, a few of them combined the two approaches. I became convinced that the combination of these two approaches is very appropriate, and that it has huge potential for extracting deep meaning in logical form from texts that are distributed around the Web.

I also became convinced that this combination can benefit from the use of machine learning techniques. Experts can annotate a training set of natural language texts with the information needed to generate the logical representation of the text. The training set can then be used to learn how to generate the logical representations of new texts.

As the talks presented at the workshop proved, a major hurdle in realising this combination of NLP, logic and machine learning is the failure of logicians to agree upon a logic that is adequate for the task. However, I believe that there is overwhelming evidence that logic programming, with its focus on the distinction between simple conclusions and more complex conditions, provides a suitable formalism for this purpose. In particular, it resembles the condition-action rules of production systems, which have been widely used as a model of human thinking.

I was greatly encouraged in reaching my conclusions by Inui-sensei's talk on modeling “reading between the lines” based on scalable and trainable abduction and large-scale knowledge acquisition. Although logic programming did not feature explicitly in his presentation, it encouraged me to believe that abductive logic programming (ALP) could serve as a powerful framework for representing and reasoning about the meanings of natural language texts. ALP combines ordinary logic programs, which represent definitions of predicates, with undefined, abducible predicates, which can have associated probabilities. Abducible predicates can be constrained by integrity constraints, which are similar to the obligations and prohibitions that are often found in legal texts.
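
A minimal sketch of these ingredients, with invented predicates (the notation for declaring abducibles and integrity constraints varies from one ALP system to another):

    % Ordinary program clauses, defining the predicate grass_is_wet:
    grass_is_wet :- rained.
    grass_is_wet :- sprinkler_was_on.

    % Abducible (undefined) predicates: rained, sprinkler_was_on.
    % In probabilistic ALP, each may carry an associated probability.

    % Integrity constraint, analogous to a prohibition: it is not
    % acceptable to assume rained on a day known to be sunny.
    false :- rained, sunny.

    % Given the observation grass_is_wet and the fact sunny,
    % abduction yields the explanation {sprinkler_was_on}.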

There have been many applications of ordinary logic programming to the representation of legal texts, especially to legislation. I believe these applications, especially those formalised by Satoh-sensei's Proleg system, can provide a sound foundation on which to build systems for understanding more informally written texts.

I also believe that there is further evidence for the use of abductive logic programming, based both on my analysis of English language texts that are designed to be easy to understand, and on the advice given by English scholars about how to write English texts that are easy to understand. One feature of such advice, which can usefully be exploited when attempting to generate logical representations of texts, is that sentences should start with “old”, familiar information and end with “new” information. The new information at the end of one sentence can serve as the old information at the beginning of the next sentence.

I believe that this “old-new” feature of natural language texts can help a computer system disambiguate text, by considering the logical representation of paragraphs instead of individual sentences. Although “understanding” paragraphs may seem to be a harder task than understanding individual sentences, it may actually turn out to be easier, because in the context of a paragraph, there are typically fewer sensible ways to disambiguate the meanings of individual sentences in the paragraph.

Extracting logical forms from natural language texts can serve many purposes, including not only deductive reasoning, but also abductive and inductive reasoning. Entailment and explanation are just two among many applications. In summary, I believe that much of the work on NLP and machine learning can be used to support this task. In particular, it can help to identify predicate-argument structure, which is the atomic building block of logical representations.

Hokkaido University, 1-3 December.

I travelled with Tanaka-sensei from the Shonan Village meeting to Sapporo on Sunday 30 November. We discussed the presentations and compared our conclusions. Both of us concluded in particular that abduction would need to play an important part in the extraction of logical forms from texts on the Web.

On Monday 1 December, I gave another talk about Logic and English, but this time informed by what I had learned at the Shonan Village meeting. I revisited one of the examples I often use in my talks. Consider the sentence: “We could not understand the talk, because we did not understand the topic of the talk.” Clearly, to understand the sentence, a reader needs to know or believe that if a person does not understand the topic of a talk, then the person will not understand the talk. Either the reader has this general, logical knowledge as background knowledge, or the reader needs to abduce it from the text itself. It seems to me that this kind of abduction is relatively straightforward, and much simpler than induction.
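
In clause form (my own rendering of the example), the background rule and the reported fact connect as follows:

    % The general rule the reader needs, either as background
    % knowledge or abduced from the text:
    not_understand(Person, Talk) :-
        topic_of(Talk, Topic),
        not_understand_topic(Person, Topic).

    % The facts stated by the "because" clause of the sentence:
    topic_of(the_talk, the_topic).
    not_understand_topic(we, the_topic).

    % Together they yield the sentence's main claim:
    % not_understand(we, the_talk).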

After my talk, I discussed the Webble system with Micke Kuwahara, joined later by Tanaka-sensei. I was familiar with Webble from my visit to the Meme Media Lab last year, and from my earlier knowledge of Intelligent Pad, which is one of the predecessors of Webble. I was impressed by how close the Webble concept is to the vision of reusing logical knowledge extracted from text on the Web, which was one of the main topics of the Shonan meeting. Webbles are like logic programming clauses. Provided clauses extracted from different sources use a shared ontology, they can be treated as memes that can be extracted from one source and combined with memes extracted from other sources. The combined memes (or clauses) can then be reused for other, previously unimagined purposes.
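
As a small illustration of this meme-like reuse (the clauses and predicate names are invented), two clauses extracted from different sources can be combined to answer a question that neither source answers on its own, provided they share the same predicate vocabulary:

    % From source 1: a general rule about side effects.
    may_cause(Drug, drowsiness) :- antihistamine(Drug).

    % From source 2: a fact expressed in the same shared ontology.
    antihistamine(diphenhydramine).

    % Combined, the clauses support a new query:
    % ?- may_cause(diphenhydramine, drowsiness).   % succeeds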

It became clear during my discussion with Kuwahara-san that combining clauses (or memes) extracted from different sources depends critically on their sharing the same ontology. This seems to be a problem that can be addressed using NLP techniques.

I also asked Tanaka-sensei his opinion about shared memory versus message passing approaches to concurrency. This issue is relevant both to how Webbles and clauses in logic programming extracted from different sources are combined, and to my related interest in combining intelligent agents into multi-agent systems.

On Tuesday, I met several members of Tanaka-sensei's lab, who showed me their work, and we discussed its relationship with other work with which I am also familiar. In the afternoon, I organised a Logic and English workshop, in which we studied five different abstracts. Two of the abstracts were quite well written, and their topics were fairly easy to understand. Nonetheless, it was useful to analyse the abstracts sentence by sentence to draw explicit attention to the features that made the abstracts easy to understand.

Two of the abstracts were hard to understand. One of them seemed to have a large gap between the problem motivating the work and the technical details of the work. It was difficult for the student to fill in the missing detail, and it is hard to avoid the hypothesis that the student's solution of the problem is unnecessarily complicated. The research reported in the other abstract seemed to be sound, but the student could not clearly explain the relationships between the different parts of the solution. We discussed the student's work in great detail, but without coming to any very concrete solution. In any case, the student was able to understand the problems with his writing, and to see that they reflected problems in his thinking about his research topic. In the case of the fifth abstract, although it was not very well written, we were able to identify improvements, which made the abstract much easier to understand.

On Wednesday morning, before returning to Tokyo in the afternoon, I gave a revised version of my talk “Towards a Science of Computing”. Tanaka-sensei commented extensively on the talk. He mentioned, in particular, the use of continuations in functional programming, to give semantics to state transitions in procedural languages. It is not clear to me, however, whether this can be regarded as a theoretical solution to the frame problem for functional languages, in the same sense that the situation calculus is a solution to the frame problem for logic-based languages, or whether this is a more practical solution, in the same sense that LPS solves the practical aspects of the frame problem for logic-based systems. I plan to explore this matter more closely in the future.

After my talk, we discussed possible applications of LPS to the problem of federating mobile devices. The use of reactive rules in LPS seems to be particularly relevant to this application, and its use of explicit time is an alternative to the modal logic currently being used for this application. Tanaka-sensei also presented other work being done in his lab.

Waseda University, 4-5 December.

I gave two talks on Thursday 4 December. The first was on Logic and English, which served also as a preparation for the workshop on the following day. The second was a lecture on logic programming for approximately 200 third year students taking Ueda-sensei's course on programming languages. Both talks went very well. There was a lot of discussion after the first talk, and many interesting comments and questions came up.

The second lecture also went very well. I was amazed at how quiet and attentive the students were. There were a couple of mistakes on my slides, and I used them to engage the students in a discussion at the end of the lecture. The discussion was quite lively, despite the large size of the class.

Seven students submitted abstracts for the writers' workshop on Friday 5 December. Most of the abstracts were very technical, and could be understood only by a very limited audience. Before the workshop, I needed to search the internet to understand some of the terms used in the abstracts. I was worried that the workshop would not be very useful, because I could understand so little of the content of the abstracts.

The workshop was very hard work, both for me and for the students. We took five hours to discuss and improve seven abstracts. But the results were far better than I expected. Even though many of the ideas described in the abstracts were very complicated and impossible for a non-expert to understand, it was possible in all cases to identify structural relationships between the concepts described in the abstracts, such as cause and effect, and goals and subgoals.

It was possible to see that in some cases the abstract described what work had been done, but not why it was done. More generally, in some cases, sentences did not clearly identify the goal-subgoal relationships, sometimes leaving out important steps: steps that might have been obvious to the writer, but probably not to the reader. In many cases, sentences were reversed: they introduced new information at the beginning of the sentence, followed by “old” information that had been presented in earlier sentences. Reversing the order of the old and new parts of such sentences made the abstracts more coherent.

We saw that in many cases, information was hard to understand because it would have needed to be explained in greater detail. In many of these cases, the problem could be solved by deleting the information and providing the detail later in the abstract, or by not providing it at all.

I was amazed that in most cases we started with an abstract that I could not understand, and ended with an abstract that was not much longer than the original, but that even I, with my limited background knowledge, could easily understand. I also had the impression that the revised abstracts would be easier to read even for readers with expert knowledge of their topics.

NII, December 8-13.

Satoh-sensei and I discussed a talk given by Professor Giovanni Sartor at NII on the morning of Friday 5 December, before I left for the workshop at Waseda University. We both thought that it might be possible to replace the defeasible deontic logic that Professor Sartor used in his talk for deciding between different interpretations of a legal text by preferences between different ways of achieving goals, expressed in an abductive logic programming (ALP) framework. We examined this possibility in some detail, and compared our proposed approach with the use of sanctions as an alternative way of dealing with obligations.

These discussions were very promising. In particular, they suggested a simple and elegant solution to the notorious “Chisholm paradox”, which is a benchmark for reasoning about obligations.
