Call for Work in Progress

We are now calling for extended abstracts on work in progress that may be of interest to the grammatical inference community.

You will get the opportunity to present and discuss your work in progress at ICGI in Rabat, Morocco, July 10-13, 2023.

Important Dates

  • Abstract submission deadline: June 10, 2023
  • Notification of acceptance: June 12, 2023
  • Conference: July 10-13, 2023

Grammatical Inference is the research area at the intersection of Machine Learning and Formal Language Theory. Since 1993, the International Conference on Grammatical Inference (ICGI) has been the meeting place for presenting, discovering, and discussing the latest research results on the foundations of learning languages, from theoretical and algorithmic perspectives to their applications (natural language and document processing, bioinformatics, model checking and software verification, program synthesis, robotic planning and control, intrusion detection, etc.).

This 16th edition of ICGI will be held in person in Rabat, Morocco's modern capital with a deep-rooted history, located on the Atlantic coast. To celebrate the 30th anniversary of the ICGI conference, the program will include a distinguished lecture by Dana Angluin. The program will also include two invited talks on recent advances in Grammatical Inference for Natural Language Processing and Bioinformatics, by Cyril Allauzen (Google NY) and Ahmed Elnaggar (TU München), a half-day tutorial at the beginning of the conference on formal languages and neural models for learning on sequences by Will Merrill, and oral presentations of accepted papers.

The 16th edition of ICGI will also partner with the Transformers+RNN: Algorithms to Yield Simple and Interpretable Representations (TAYSIR) competition, an online challenge on extracting simpler models from already-trained neural networks. The conference will include a special session, organized by TAYSIR, presenting the results of the competition, with an opportunity for competitors to present their approaches.

Submission Instructions for Extended Abstracts on Work in Progress

We invite extended abstracts on work in progress, which may be theoretical or experimental, fundamental or application-oriented, and may solve or propose important problems.

Each abstract should contain a title, authors, affiliations, and at least three keywords describing the content of the work.

Abstracts must be submitted in PDF format using the JMLR LaTeX style file. Submissions are expected to be between 2 and 4 pages long (not counting the bibliography and appendices).

All abstracts should be submitted by email to icgi@fsr.ac.ma before June 10th (AoE). Note that if you submit your abstract before the deadline, we will be able to send the notification of acceptance earlier (at most one week after submission). Acceptance is conditional on relevance to the topics of ICGI and clarity.

Topics of Interest

Typical topics of interest include (but are not limited to):

  • Theoretical aspects of grammatical inference: learning paradigms, learnability results, complexity of learning.
  • Learning algorithms for language classes inside and outside the Chomsky hierarchy. Learning tree and graph grammars. 
  • Learning probability distributions over strings, trees or graphs, or transductions thereof.
  • Theoretical and empirical research on query learning, active learning, and other interactive learning paradigms.
  • Theoretical and empirical research on methods including, but not limited to, spectral learning, state-merging, distributional learning, statistical relational learning, statistical inference, or Bayesian learning.
  • Theoretical analysis of neural network models and their expressiveness through the lens of formal languages.
  • Experimental and theoretical analysis of different approaches to grammar induction, including artificial neural networks, statistical methods, symbolic methods, information-theoretic approaches, minimum description length, complexity-theoretic approaches, heuristic methods, etc.
  • Leveraging formal language tools, models, and theory to improve the explainability, interpretability, or verifiability of neural networks or other black box models.
  • Learning with contextualized data: for instance, Grammatical Inference from strings or trees paired with semantic representations, or learning by situated agents and robots.
  • Novel approaches to grammatical inference: induction by DNA computing or quantum computing, evolutionary approaches, new representation spaces, etc.
  • Successful applications of grammatical learning to tasks in fields including, but not limited to, natural language processing and computational linguistics, model checking and software verification, bioinformatics, robotic planning and control, and pattern recognition.

Conference Chairs

  • François Coste, Inria Rennes, France
  • Faissal Ouardi, Mohammed V University in Rabat, Morocco
  • Guillaume Rabusseau, University of Montreal – Mila, Canada

 
