Inductive Logic Programming (ILP) is a Machine Learning approach with foundations in Logic Programming. Both the problem specification and the models discovered by ILP systems are represented as Prolog programs, allowing for great expressiveness and flexibility. However, this flexibility comes at a high computational cost, and ILP systems are known for their difficulty in scaling up. Constructing and evaluating complex concepts are two of the main problems that prevent ILP systems from tackling many of the most interesting learning problems. Large concepts cannot be constructed or evaluated simply by parallelizing existing top-down search algorithms or improving the underlying Prolog engine; novel search strategies and cover algorithms are needed. The main focus of this talk is on how to efficiently construct and evaluate such complex hypotheses in an ILP setting. Namely, we will present an efficient theta-subsumption algorithm that improves over Prolog's SLD-resolution by several orders of magnitude. We will also show how a new bottom-up search strategy, coupled with this efficient subsumption algorithm, led to the discovery of a better model for a protein-binding application problem.
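For readers unfamiliar with the central operation mentioned above: a clause C theta-subsumes a clause D if some substitution θ maps every literal of C into D. The sketch below is a naive brute-force check for illustration only (the talk presents a far more efficient algorithm, which is not reproduced here); literal and term representations are our own assumptions.

```python
from itertools import product

# Literals are tuples: (predicate, arg1, arg2, ...).
# By convention here, uppercase strings are variables.
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def apply_sub(lit, theta):
    # Apply substitution theta to a literal's arguments.
    return (lit[0],) + tuple(theta.get(a, a) if is_var(a) else a
                             for a in lit[1:])

def theta_subsumes(c, d):
    """True if clause c theta-subsumes clause d, i.e. there exists a
    substitution theta such that every literal of c, under theta, occurs
    in d. Brute force: try every mapping of c's variables to d's terms."""
    vars_ = sorted({a for lit in c for a in lit[1:] if is_var(a)})
    terms = sorted({a for lit in d for a in lit[1:]})
    for combo in product(terms, repeat=len(vars_)):
        theta = dict(zip(vars_, combo))
        if all(apply_sub(lit, theta) in d for lit in c):
            return True
    return False

# p(X, Y) subsumes {p(a, b), q(a)} via X -> a, Y -> b
print(theta_subsumes({("p", "X", "Y")}, {("p", "a", "b"), ("q", "a")}))  # True
```

The exponential cost of this enumeration (all variable-to-term mappings) is precisely why efficient subsumption algorithms, such as the one presented in the talk, matter for scaling ILP.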
Inductive Logic Programming applied to Bioinformatics
May 17, 2011
1:00 pm
José Santos
José Santos has a Ph.D. in Computer Science (2010) from Imperial College London. During his Ph.D., he worked on the theory and implementation of Inductive Logic Programming (ILP) systems. ILP is a first-order-logic form of Machine Learning. José is now a post-doctoral fellow at the Microsoft Language Development Center, where he is working on improving Bing's query-rewriting mechanisms so that the Bing backend may return more relevant documents. José also holds a Licenciatura in Informatics Engineering (2004, FCT-UNL), an MSc in Artificial Intelligence (2006, FCT-UNL), and an MSc in Bioinformatics (2007, Imperial College). After graduating in 2004, José worked for one year at Novabase Business Intelligence.