LING 4424: Truth in a Model


In previous lectures, we have discussed how we can parse a sentence and create a logical form for it using first-order logic, the lambda calculus, and combinator parsers. As review, let's look at what our fragment looks like so far.


As our logic, we use a simple first-order logic. Though this is not quite as expressive as most presentations of first-order logic, it is sufficient to show how the logic is constructed in OCaml, and it could easily be extended to account for similar, more expressive logics.
(* A term is the basic atomic unit. It represents an individual or is a
   variable over individuals. *)
type term = 
  | Variable of string 
  | Individual of string

(* There are two quantifiers in this logic: universal and existential. We give
   these the names All and Some. *)
type quantifier =
  | All
  | Some

(* The guts of our logic's syntax. This defines the sorts of expressions which
   constitute well-formed formulas. *)
type formula =
  (* Predicates represent n-ary relations *)
  | Predicate of string * term list
  (* Material implication *)
  | Implication of formula * formula
  (* Conjunction *)
  | Conjunction of formula * formula
  (* Quantifiers *)
  | Quantifier of quantifier * string * formula * formula

Note that this defines only the syntax of our logic: it specifies which formulas are possible, but it does not actually specify what those formulas mean. There is nothing here that tells the computer that Implication (a, b) should be true just in case a is false or both a and b are true. In other words, we are still missing a semantics for our logic.


Before moving on to the semantics of first-order logic, we need to be able to build up formulas from natural language sentences. Typically, in formal semantics, this is done using the Lambda Calculus. Conveniently, OCaml is a superset of the lambda calculus itself, so we do not need to redefine the lambda calculus ourselves. Instead, we can rely on OCaml's own control flow to do much of the work.

Normally, for example, the noun student might be represented as λx.student(x). Given that student is here intended to be used as a predicate, we can easily represent this in OCaml using anonymous functions and the formula type we defined:

fun x -> Predicate ("student", [x])
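Applying this anonymous function to a term yields a complete formula. A small self-contained check (repeating the term type, and only the Predicate constructor of formula, so the snippet stands alone):

```ocaml
type term =
  | Variable of string
  | Individual of string

(* Only the Predicate constructor is needed for this sketch *)
type formula =
  | Predicate of string * term list

(* The OCaml counterpart of λx.student(x) *)
let student = fun x -> Predicate ("student", [x])

(* Applying the lambda to an individual yields a closed formula *)
let s = student (Individual "john")
(* s = Predicate ("student", [Individual "john"]) *)
```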

At this point, you may note that there are two kinds of variables floating around. There are OCaml variables and there is the Variable constructor of our term type. This is because our lambda calculus is represented using OCaml, and thus uses OCaml variables. However, FOL has variables of its own, in particular in quantified expressions. FOL variables cannot be easily represented as OCaml variables, so they are given their own constructor.

There is still a problem with FOL variables. OCaml variables are nice, because OCaml has rigid scoping rules. However, using strings as variable names we are not guaranteed the scoping rules that we want. For example, consider the following formula, which is equivalent to the FOL expression ∃x[student(x) & ∀x[program(x) → wrote(x, x)]].

Quantifier (Some, "x", Predicate ("student", [Variable "x"]),
  Quantifier (All, "x", Predicate ("program", [Variable "x"]),
    Predicate ("wrote", [Variable "x"; Variable "x"])))

If we bind the variable x twice, we can't say how this expression will be interpreted. The exact result is dependent on our implementation of the semantics, but we'd be better off avoiding the issue altogether. There wouldn't be a problem if we bound a different variable name every time, but how do we do that? The answer is with a gensym (generate symbol) function, which creates a new symbol each time it is called.
(* Generates a new variable name, guaranteed to be unique. *)
let gensym =
  let n = ref 0 in
  fun () ->
    n := !n + 1;
    "variable" ^ string_of_int !n

This function is unusual in OCaml in that it is impure: given the same input, it will return different output depending on the program state. This is implemented using ref, which creates a mutable reference cell (whereas normally we use only immutable bindings). We then increment the cell by one each time the function is called.

(* Has the value "variable1" *)
let v1 = gensym ()

(* Has the value "variable2" *)
let v2 = gensym ()
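Putting gensym to work, the problematic formula above can be rebuilt so that each quantifier binds a fresh name and no accidental capture is possible. A self-contained sketch (repeating the types and gensym from above):

```ocaml
type term = Variable of string | Individual of string

type quantifier = All | Some

type formula =
  | Predicate of string * term list
  | Implication of formula * formula
  | Conjunction of formula * formula
  | Quantifier of quantifier * string * formula * formula

(* Generates a new variable name, guaranteed to be unique *)
let gensym =
  let n = ref 0 in
  fun () ->
    incr n;
    "variable" ^ string_of_int !n

(* Each quantifier binds its own fresh name *)
let x = gensym ()  (* "variable1" *)
let y = gensym ()  (* "variable2" *)

let f =
  Quantifier (Some, x, Predicate ("student", [Variable x]),
    Quantifier (All, y, Predicate ("program", [Variable y]),
      Predicate ("wrote", [Variable x; Variable y])))
```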

The last thing that we need for parsing is the parser itself. We can re-use a lot of the code from our earlier work with combinator parsers. The type of our main parser will be formula combinatorparser. The type of each phrase-level parser will be different depending on the type of the lambda expression that it represents. For example, nouns are of the form λx.student(x), so a noun parser needs to have the type (term -> formula) combinatorparser.

With combinator parsers for semantics, we need to define both what is grammatical for a parser and how the resulting structures are combined semantically. This is done by assigning a semantic composition rule for each syntactic rule. A table describing a simple grammar is given below, along with the code describing the parser itself.

Syntax Rule             Semantics Rule
S → NP VP               ⟦NP⟧(⟦VP⟧)
NP → Det N OptRel       ⟦Det⟧(⟦OptRel⟧(⟦N⟧))
NP → PropN              ⟦PropN⟧
VP → TV NP              λx.[⟦NP⟧(λy.(⟦TV⟧(x,y)))]
OptRel → ε              λx.x
OptRel → that VP        λP.λt.[P(t) & ⟦VP⟧(t)]
let rec s words = ((np &. vp) >. (fun (lfNP, lfVP) -> lfNP lfVP)) words
and np words = (
    ((det &. n &. optrel) >.
      (fun ((lfDet, lfN), lfRel) -> lfDet (lfRel lfN))) |.
    propn
  ) words
and vp words = (
    (tv &. np) >.
      (fun (lfTV, lfNP) -> fun subj ->
        lfNP (fun obj -> lfTV subj obj))
  ) words
and optrel words = (opt ((that &. vp) >. (fun (_, lfVP) cn t ->
  let lfCN = (cn t) in
  let lfVP' = (lfVP t) in
  Conjunction (lfCN, lfVP')))) words

Now we just need to define our lexicon. The lexicon is mostly a matter of translating FOL to OCaml. We also need to make sure that our empty parser is a lambda calculus parser. The empty parser simply has the semantics of the identity function: it has no effect other than to combine vacuously with some other lambda expression.
let empty =
  let identity x = x in
  fun words -> [(identity, words)]

(* We can make any (functional) parser optional by disjoining it with empty *)
let opt p = p |. empty

The rest of our vocabulary proceeds mostly as usual, except that we have to supply lambda expressions:
let n = function
  | "program" :: suf -> [((fun t -> Predicate ("program", [t])), suf)]
  | "student" :: suf -> [((fun t -> Predicate ("student", [t])), suf)]
  | _ -> []
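Assuming the list-of-successes representation from our earlier combinator-parser lectures (each parser returns a list of (semantics, remaining words) pairs), we can already run the noun parser on its own. A self-contained check, repeating the types and the noun parser so the snippet stands alone:

```ocaml
type term = Variable of string | Individual of string
type formula = Predicate of string * term list

(* A phrase parser maps a word list to a list of
   (logical form, remaining words) pairs *)
let n = function
  | "program" :: suf -> [((fun t -> Predicate ("program", [t])), suf)]
  | "student" :: suf -> [((fun t -> Predicate ("student", [t])), suf)]
  | _ -> []

let () =
  match n ["student"; "sleeps"] with
  | [(lf, rest)] ->
      (* One parse: the noun's lambda plus the unconsumed words *)
      assert (rest = ["sleeps"]);
      assert (lf (Individual "john")
              = Predicate ("student", [Individual "john"]))
  | _ -> assert false
```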

Truth in a Model

So now we can parse to logical formulas. But what do those formulas mean? How do we get our program to interpret them? One of the simplest things we can do (and the gateway to more complex things) is to tell the program what the world looks like, and then have it tell us whether a particular sentence is true or false in the world.

Of course, we can't tell our program everything about the world. Instead, we need to model the world somehow. This is traditionally done using sets.

The truth of a formula such as sleeps(john) is defined as john ∈ sleepsᴹ, where sleepsᴹ is the set of sleepers as defined by a model M.

What about two-place predicates? For two-place predicates, we can still use sets, but now we use sets of pairs of individuals. Each pair represents two individuals for which the relation holds. Again, wrote(x, y) will be defined as true iff (x, y) ∈ wroteᴹ.

Three-place predicates (such as "give") work similarly, and four-place predicates (if they exist), and so on. To make things a little easier on ourselves, let's represent all of these the same way in the model, and use sets of lists instead of sets of pairs.

Here's our preliminary model type:
(* Lets us use lists of terms in sets *)
module OrderedTermList = struct
  type t = term list
  let compare = compare
end

(* Defines sets of term lists *)
module TermListSet = BatSet.Make (OrderedTermList)

type model = {
  relations : (string * TermListSet.t) list;
}

The relations field is an association list mapping relation names (such as sleeps) to the set representing the relation itself. An example is given below.



Relation    Set
sleeps      {john, terry}
wrote       {⟨terry, shrdlu⟩, ⟨shakespeare, hamlet⟩}
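This model can be written out directly in OCaml. A self-contained sketch (using the standard library's Set.Make in place of Batteries' BatSet.Make, and a hypothetical holds helper for the membership check):

```ocaml
type term = Variable of string | Individual of string

module OrderedTermList = struct
  type t = term list
  let compare = compare
end

(* Stdlib Set.Make stands in for BatSet.Make here *)
module TermListSet = Set.Make (OrderedTermList)

type model = { relations : (string * TermListSet.t) list }

let john = Individual "john"
let terry = Individual "terry"
let shrdlu = Individual "shrdlu"

let m = {
  relations = [
    ("sleeps", TermListSet.of_list [ [john]; [terry] ]);
    ("wrote",  TermListSet.of_list [ [terry; shrdlu] ]);
  ]
}

(* A predicate holds iff its argument list is in the relation's set *)
let holds model name args =
  match List.assoc_opt name model.relations with
  | Some s -> TermListSet.mem args s
  | None -> false
```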

At this point, we know how to handle predicates. But our logic has more than just predicates in it - we need to know how to handle conjunction, implication, quantification, and whatever additional kinds of expressions we might eventually add. As a matter of fact, we've actually done something similar already: we defined the evaluation of arithmetic expressions.

type expression = 
  | Number of int
  | Addition of expression * expression
  | Subtraction of expression * expression
  | Multiplication of expression * expression
  | Division of expression * expression

let rec evaluate = function
  | Number x -> x
  | Addition (x, y) -> (evaluate x) + (evaluate y)
  | Subtraction (x, y) -> (evaluate x) - (evaluate y)
  | Multiplication (x, y) -> (evaluate x) * (evaluate y)
  | Division (x, y) -> (evaluate x) / (evaluate y)

What we will do for FOL is very similar, except that we evaluate with respect to a particular model and the base case for predicates is a little more complex. This paradigm also highlights the distinction between the syntax of a language on the one hand and its semantics on the other. Given the syntax in expression, we could define many different evaluate functions. Don't assume that, just because you're familiar with a particular syntax, the semantics will be what you expect! There are, in fact, many different ways of formulating FOL (and other logics) that look the same, but behave very differently at the level of semantics.
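To make the point concrete, here is a second, equally legitimate "semantics" for the very same expression syntax: instead of evaluating to an integer, it renders the expression as a string. The show function is a hypothetical name introduced for this sketch:

```ocaml
type expression =
  | Number of int
  | Addition of expression * expression
  | Subtraction of expression * expression
  | Multiplication of expression * expression
  | Division of expression * expression

(* One semantics: evaluate to an integer *)
let rec evaluate = function
  | Number x -> x
  | Addition (x, y) -> evaluate x + evaluate y
  | Subtraction (x, y) -> evaluate x - evaluate y
  | Multiplication (x, y) -> evaluate x * evaluate y
  | Division (x, y) -> evaluate x / evaluate y

(* A different semantics over the same syntax: render as a string *)
let rec show = function
  | Number x -> string_of_int x
  | Addition (x, y) -> "(" ^ show x ^ " + " ^ show y ^ ")"
  | Subtraction (x, y) -> "(" ^ show x ^ " - " ^ show y ^ ")"
  | Multiplication (x, y) -> "(" ^ show x ^ " * " ^ show y ^ ")"
  | Division (x, y) -> "(" ^ show x ^ " / " ^ show y ^ ")"

let e = Addition (Number 1, Multiplication (Number 2, Number 3))
(* evaluate e = 7;  show e = "(1 + (2 * 3))" *)
```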

Our evaluation function will look something like this. It's up to you to define it more precisely.
let rec evaluate model = function
  | Predicate (name, terms) -> failwith "Not implemented"
  | Implication (left, right) -> failwith "Not implemented"
  | Conjunction (left, right) -> failwith "Not implemented"
  | Quantifier (All, var, left, right) -> failwith "Not implemented"
  | Quantifier (Some, var, left, right) -> failwith "Not implemented"

Let's consider the definitions of the remaining kinds of expression.


Implication is fairly simple in FOL. We have a left part (the antecedent) and a right part (the consequent). The expression is true if the antecedent is false or if both the antecedent and the consequent are true. Of course, in our case, we will need to recursively evaluate the antecedent and consequent.

p q p → q
0 0 1
0 1 1
1 0 0
1 1 1


Conjunction is quite similar. Now we want the expression to be true iff both the left and right components are true.

p q p & q
0 0 0
0 1 0
1 0 0
1 1 1
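Both truth tables translate directly into OCaml booleans. A minimal sketch of the helpers one might use inside evaluate (implies is a hypothetical helper name; conjunction is simply (&&)):

```ocaml
(* Material implication: false only when the antecedent is true
   and the consequent is false *)
let implies p q = (not p) || q

(* The four rows of the implication table *)
let row1 = implies false false  (* true *)
let row2 = implies false true   (* true *)
let row3 = implies true false   (* false *)
let row4 = implies true true    (* true *)
```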


Quantification is where a lot of the real complexity is. First, let us consider our definition for the syntax of quantifiers. Quantifiers have the form Quantifier (quant : quantifier, var : string, left : formula, right : formula). Compare an example in FOL: ∃x.[P(x) & Q(x)]. There's a difference between the two: the OCaml version doesn't specify the connective between the left and right components. Why did we do this?

In linguistics, it is common to talk about generalized quantifiers. The idea is that quantifiers in natural language generally have the same type. However, different quantifiers differ in the logical connective used: the existential quantifier is paired with the connective &, while the universal quantifier is paired with the connective →. While ∀x.[P(x) & Q(x)] is a well-formed formula of FOL, it is rarely what we want in natural language translations. Can you see why?

When we define our semantics for quantifiers, we will need to keep this in mind, and convert the left and right expressions into a new expression of the appropriate form.

The last thing we need to do is consider how the quantification itself actually works. How do we tell if something is true "for all x"? Essentially, what we want to do is iterate over all possible individuals and check whether the body of the quantifier is true. We can mostly do this with OCaml list iteration tools such as List.exists and List.for_all, but we also need to handle the variable.
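A quick illustration of these two iteration tools, which correspond directly to the universal and existential quantifiers:

```ocaml
let individuals = ["john"; "terry"; "shakespeare"]

(* "for all x": the body must hold of every individual *)
let all_named = List.for_all (fun x -> String.length x > 0) individuals

(* "for some x": the body must hold of at least one individual *)
let someone_is_terry = List.exists (fun x -> x = "terry") individuals
```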

Essentially, what we are going to do is iterate over every possible assignment for the variable bound by the quantifier and check the value of the body given that assignment. There are two things that we are missing before we can do this. First, we need to be able to iterate over all possible individuals. What is the range of possible individuals? Unfortunately, we need to tell the program this. We can add this to the model, extending it with an individuals field which simply states who/what exists in the universe.
type model = {
  relations : (string * TermListSet.t) list;
  individuals : term list;
}

All right, now we can iterate over all possible values for x, but how do we assign these values to x in the body of quantification? This is done with a substitution function, which is up to you to define, but closely follows the paradigms that we've been using throughout the class. Being able to substitute variables for other values is a common process in formal languages, including the definition of programming languages. In our case, we don't need to worry about shadowing or other issues that may come up, since we've used gensym to define all of our variable names.
let rec substitute varname formula repl =
  match formula with
  | Predicate (name, terms) -> failwith "Not implemented"
  | Implication (left, right) -> failwith "Not implemented"
  | Conjunction (left, right) -> failwith "Not implemented"
  | Quantifier (q, var, left, right) -> failwith "Not implemented"
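Without giving away the formula-level cases, here is a sketch of the term-level step the Predicate case will need. The substitute_term helper is a hypothetical name introduced for this sketch; the full substitute can map it over each Predicate's term list and recurse through the connectives:

```ocaml
type term = Variable of string | Individual of string

(* Replace occurrences of the named variable within a single term.
   Individuals, and variables with other names, pass through unchanged. *)
let substitute_term varname repl = function
  | Variable v when v = varname -> repl
  | t -> t

(* Substituting john for x turns the open term into a closed one *)
let t1 = substitute_term "x" (Individual "john") (Variable "x")
(* t1 = Individual "john" *)

(* A variable with a different name is untouched *)
let t2 = substitute_term "x" (Individual "john") (Variable "y")
(* t2 = Variable "y" *)
```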

Where are we now?

What is our interpreter able to handle so far?

What is our interpreter incapable of handling?

For next Tuesday: read Blackburn and Bos, Ch. 3