Keynote Speakers

  • WCCI Plenary Talks

    • Connecting Computational Intelligence to the Cyber-Physical World

      The development of cyber-physical systems with multiple sensor/actuator components and feedback loops has given rise to advanced automation applications, including energy and power, intelligent transportation, water systems, and manufacturing. Traditionally, feedback control has focused on enhancing the tracking and robustness performance of the closed-loop system; however, as cyber-physical systems become more complex, interconnected, and interdependent, there is a need to refocus our attention not only on performance but also on the resilience of cyber-physical systems. In situations of unexpected events and faults, computational intelligence can play a key role in improving the fault tolerance of cyber-physical systems and preventing serious degradation or a catastrophic system failure. The goal of this presentation is to provide insight into the design and analysis of intelligent monitoring methods for cyber-physical systems, which will ultimately lead to more resilient societies.

    • Can intelligent systems be conscious?

      The concept of consciousness is complex and takes various forms. Whether an intelligent system can be conscious has long been debated, and the question is growing louder as systems spring up everywhere that seem capable of dialoguing with humans in a very natural way.


      We propose to look at several facets of consciousness, from phenomenological consciousness, linked to perception, to access consciousness, which gives us information about one's own actions. As early as 1982, Marvin Minsky [1] suggested that self-conscious systems could be built by providing machines with ways to examine their own mechanisms while they are working. In 2009, Jacques Pitrat [2] claimed that, for a conscious artificial being, the ability to monitor its own thought enables it to explain its decisions so that they can be accepted by others, which points in the direction of eXplainable AI. A recent study [3] provides a list of indicator properties derived from scientific theories to assess consciousness in an intelligent system. We offer an overview of some interesting aspects of consciousness from the angle of intelligent systems, which can be different from human consciousness, and we ask to what extent a present or future system can have such a form of consciousness and what the advantages and drawbacks would be.


      [1] Marvin Minsky, "Why People Think Computers Can't," AI Magazine, vol. 3, no. 4, pp. 3-15, 1982.
      [2] Jacques Pitrat, Artificial Beings: The Conscience of a Conscious Machine. ISTE and Wiley, 2009.
      [3] Butlin, P. et al., "Consciousness in Artificial Intelligence: Insights from the Science of Consciousness," preprint, https://arxiv.org/abs/2308.08708 (2023).

    • Accelerating Science Discovery – High-Performance Simulation, Math, and AI

      Modern scientific discovery relies on advances in data science, mathematics, and artificial intelligence (AI). The combination of these disciplines has led to significant breakthroughs in various fields, including materials science, drug discovery, and chip design. This talk discusses the role of AI-enriched simulation in accelerating science discovery and the use of high-performance computing, math, and AI to drive innovation.

      Key aspects of AI-enriched simulation include:

      Accelerating the discovery process: AI-enriched simulation uses AI to identify, from a massive dataset, the most promising simulations to run, reducing computational expense and saving precious time and resources.

      Automating complex simulations: AI-enriched simulation makes complex, predictive simulations automatable and user-friendly for researchers without deep computational expertise, removing a critical research bottleneck.

      Reducing the number of simulations needed: By using AI to analyze data and determine the most promising simulations, AI-enriched simulation can speed up screening by a factor of 10 to 100.

      Leveraging AI and machine learning: AI-assisted simulations use neural networks and machine learning algorithms to predict complex properties of materials and other systems, bypassing expensive physics-based routines and accelerating the discovery process.

      Collaborative research: AI expertise, such as that found at Berkeley Lab, can be combined with traditional research methods to apply AI to various scientific problems, leading to innovative solutions and new discoveries.

      In summary, the future of scientific discovery lies in the integration of high-performance simulation, math, and AI. By harnessing the power of these technologies, researchers can accelerate the discovery process, automate complex simulations, and unlock new possibilities in various fields.
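
      A minimal, hypothetical sketch of the screening idea above: a surrogate model (here a random forest) is trained on past simulation results and ranks a large untested pool so that only the top candidates are passed to the expensive simulator. The descriptor dimensions, pool sizes, and the run_expensive_simulation placeholder are illustrative and not tied to any specific workflow.

        # AI-guided screening sketch: a surrogate trained on past simulation
        # results ranks a large candidate pool; only the most promising
        # candidates are sent to the expensive simulator.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)

        # Past campaign: descriptors of simulated candidates and their scores.
        X_known = rng.random((500, 8))          # e.g. material/molecule descriptors
        y_known = (X_known ** 2).sum(axis=1)    # stand-in for a simulated property

        surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
        surrogate.fit(X_known, y_known)

        # Large untested pool: predict, then simulate only the top fraction.
        X_pool = rng.random((20_000, 8))
        predicted = surrogate.predict(X_pool)
        top_k = np.argsort(predicted)[-200:]    # ~1% of the pool, a ~100x reduction

        for idx in top_k:
            pass  # run_expensive_simulation(X_pool[idx])  # placeholder, hypothetical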


    • Utilization of large-scale brain image database for digitalization of psychiatric and neurological disorders

      In recent years, neuroimaging databases for psychiatric and neurological disorders have enabled users to find common and disease-specific features and to redefine disease spectra using data-driven approaches. Within the Brain/MINDS Beyond project (2018-2023), the neuroimaging database initiatives have established a multi-site, multi-disorder MRI database.

    • Multiobjective evolutionary optimization in space engineering and spin-off to industry

      Multiobjective evolutionary computation (MOEC) is gaining popularity in Japan because of its various advantages, such as the capability of finding a wide variety of Pareto-optimal designs. At the Japan Aerospace Exploration Agency (JAXA), I have been engaged in multiobjective design optimization in space engineering, including rocket engine turbopump design, spacecraft trajectory design, reusable space transportation system design, spacecraft landing system design, and Moon landing site selection. In this talk, I will introduce some examples of these applications of MOEC at JAXA.
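
      As a small illustration of the Pareto-optimal designs mentioned above (not JAXA code), the following sketch extracts the non-dominated set from a batch of evaluated two-objective designs, assuming both objectives are to be minimised.

        # Extract the non-dominated (Pareto-optimal) rows of an objective matrix.
        import numpy as np

        def pareto_front(objectives: np.ndarray) -> np.ndarray:
            """Return a boolean mask of non-dominated rows (minimisation)."""
            n = objectives.shape[0]
            mask = np.ones(n, dtype=bool)
            for i in range(n):
                if not mask[i]:
                    continue
                # A point dominates i if it is <= in every objective and < in at least one.
                dominates_i = (np.all(objectives <= objectives[i], axis=1) &
                               np.any(objectives < objectives[i], axis=1))
                if dominates_i.any():
                    mask[i] = False
            return mask

        # Example: a trade-off between, say, engine mass and fuel consumption.
        designs = np.random.default_rng(1).random((200, 2))
        front = designs[pareto_front(designs)]
        print(f"{front.shape[0]} Pareto-optimal designs out of {designs.shape[0]}")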

  • IJCNN Keynote Talks

    • Least Squares Support Vector Machines and Deep Learning

      While powerful architectures have been proposed in deep learning, support vector machines and kernel-based methods rest on solid foundations from the perspective of statistical learning theory and optimization. Simple core models have been obtained within the least squares support vector machines framework, related to classification, regression, kernel principal component analysis, kernel canonical correlation analysis, kernel spectral clustering, recurrent models, approximate solutions to partial differential equations and optimal control problems, etc. The representations of the models are understood in terms of primal and dual representations, respectively related to feature maps and kernels. These insights have been exploited for tailoring representations to given data characteristics, both for high-dimensional input data and large-scale data sets. One can work either with explicit feature maps (e.g. convolutional feature maps) or with implicit feature maps through the kernel functions.
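
      As a toy illustration of the dual representation mentioned above, the following sketch trains a least squares SVM in its function-estimation form on ±1 labels: training reduces to solving a single linear system, and prediction uses the kernel expansion. The RBF kernel and the hyperparameter values are illustrative choices.

        # LS-SVM sketch (dual form): one linear solve for training, kernel
        # expansion for prediction.
        import numpy as np

        def rbf_kernel(A, B, sigma=1.0):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * sigma ** 2))

        def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
            n = len(y)
            K = rbf_kernel(X, X, sigma)
            # Dual system:  [ 0    1^T   ] [b]   [0]
            #               [ 1  K + I/g ] [a] = [y]
            A = np.zeros((n + 1, n + 1))
            A[0, 1:] = 1.0
            A[1:, 0] = 1.0
            A[1:, 1:] = K + np.eye(n) / gamma
            rhs = np.concatenate(([0.0], y))
            sol = np.linalg.solve(A, rhs)
            return sol[0], sol[1:]          # bias b, dual coefficients alpha

        def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
            return rbf_kernel(X_new, X_train, sigma) @ alpha + b

        # Toy binary classification with +/-1 labels.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 2))
        y = np.sign(X[:, 0] + X[:, 1])
        b, alpha = lssvm_fit(X, y)
        pred = np.sign(lssvm_predict(X, b, alpha, X))
        print("training accuracy:", (pred == y).mean())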

    • Towards More Robust and Reliable Machine Learning

      In statistical machine learning, training data is often full of uncertainties due to insufficient information, label noise, and bias. In this talk, I will give an overview of our research on reliable machine learning, including weakly supervised learning, noise-robust learning, and transfer learning. Then, I will discuss our recent challenges to integrate these approaches and develop a generic machine learning methodology with fewer modeling assumptions.

    • Learning from Data in the post-Foundation Models Era: bringing learning and reasoning together

      Deep Learning continues to attract the attention and interest not only of the wider scientific community and industry, but also of society and policy makers. Fuelled by the remarkable generalisation and separability capabilities offered by transformers (e.g. ViT), Foundation Models (FM) offer unparalleled feature extraction opportunities. However, the mainstream approach of end-to-end iterative training of a hyper-parametric, cumbersome, and opaque model architecture has led some authors to brand such models "black boxes". This degrades their generalisation and incurs heavy costs in labelled data, compute power, and the related energy. Cases have been reported in which such models give wrong predictions with high confidence, something that jeopardises safety and trust. Deep Learning focuses on accuracy and overlooks explainability, the semantic meaning of the internal model representations, reasoning, and its link with the problem domain. In fact, it shortcuts from large amounts of (labelled) data to predictions, bypassing causality and substituting it with correlation and error minimisation. It relies on assumptions about the data distributions that are often not satisfied and suffers from catastrophic forgetting when faced with continual and open-set learning. Once trained, such models are inflexible to new knowledge; they are good only for what they were originally trained for. Indeed, the ability to detect the unseen and unexpected and to start learning such new classes in real time with no or very little supervision (zero- or few-shot learning) is critically important but is still an open problem. The challenge is to fill the gap between high levels of accuracy and semantically meaningful solutions.
      This talk will focus on "getting the best from both worlds": the powerful latent feature spaces formed by pre-trained deep architectures such as transformers, combined with interpretable-by-design (in linguistic, visual, semantic, and similarity-based form) models. One can see this as a fully interpretable frontend and a powerful backend working in harmony. Examples will be demonstrated from the latest projects in the areas of autonomous driving, Earth Observation, and health, as well as a set of well-known benchmarks.
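
      The following is a conceptual sketch of the "interpretable frontend on a powerful backend" idea, not the speaker's actual system: latent features (which in practice would come from a frozen pre-trained encoder such as a ViT) are classified by similarity to per-class prototypes, so every prediction can be traced back to prototype similarities.

        # Prototype-based, interpretable-by-design classifier on top of latent features.
        import numpy as np

        def l2_normalise(Z):
            return Z / np.linalg.norm(Z, axis=1, keepdims=True)

        def fit_prototypes(features, labels):
            """One prototype per class: the mean of its normalised latent features."""
            protos = {c: l2_normalise(features[labels == c]).mean(axis=0)
                      for c in np.unique(labels)}
            classes = sorted(protos)
            return classes, np.stack([protos[c] for c in classes])

        def predict_with_explanation(classes, prototypes, features):
            sims = l2_normalise(features) @ prototypes.T   # similarity to each class prototype
            pred = np.argmax(sims, axis=1)
            return [classes[i] for i in pred], sims        # sims double as an explanation

        # 'feats' would come from a frozen transformer in practice; random vectors
        # stand in here so that the sketch is self-contained.
        rng = np.random.default_rng(0)
        feats, labs = rng.normal(size=(60, 16)), rng.integers(0, 3, size=60)
        classes, protos = fit_prototypes(feats, labs)
        preds, similarities = predict_with_explanation(classes, protos, feats[:5])
        print(preds, similarities.round(2))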

    • Predictive Processing: Illuminating and Modeling Cognitive Development

      Cognitive development is an intricate and multifaceted process that has captivated researchers for decades. Human abilities related to perception and action continually evolve during development, exhibiting remarkable diversity among individuals.

    • Keras: a shortcut to master AI

      Discover the transformative capabilities of the Keras 3 API. Delve into deep learning best practices, where you'll gain insights into crafting uncomplicated models and executing them with your preferred backend—be it PyTorch, TensorFlow, or JAX. Explore the dynamic potentials of KerasNLP and KerasCV modules, unveiling the art of constructing powerful AI applications. Witness the seamless creation of generative image and language models, empowering you to achieve remarkable feats with just a few lines of code.
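
      A minimal sketch of the backend-agnostic workflow described above, assuming Keras 3 and the chosen backend are installed: the backend is selected before Keras is imported, and the same model definition then runs on JAX, TensorFlow, or PyTorch.

        # Pick a backend before importing Keras 3, then define and train a small model.
        import os
        os.environ["KERAS_BACKEND"] = "jax"   # or "tensorflow" / "torch"

        import numpy as np
        import keras

        model = keras.Sequential([
            keras.Input(shape=(28 * 28,)),
            keras.layers.Dense(128, activation="relu"),
            keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])

        # Random stand-in data so the sketch runs anywhere; swap in a real dataset.
        x = np.random.rand(256, 28 * 28).astype("float32")
        y = np.random.randint(0, 10, size=(256,))
        model.fit(x, y, epochs=1, batch_size=32)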

  • FUZZ-IEEE Keynote Talks

    • When There Is Little Data Can AI Still Work? – Approximate Reasoning with Knowledge Interpolation and its Applications

      AI is on the brink of revolutionising industries globally, having made significant advancements in recent years. These achievements are primarily attributed to the use of deep learning techniques that process vast amounts of data. Yet, a pivotal question emerges when faced with limited data for a new problem, especially if this data is ambiguously characterised. Can AI maintain its efficacy under these constraints? This presentation delves into contributions addressing this query, highlighting how fuzzy rule interpolation (FRI) enables approximate reasoning in situations marked by sparse or incomplete knowledge.
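
      A highly simplified illustration of the interpolation idea behind FRI (not a full method such as KH or scale-and-move interpolation): when an observation falls in the gap between the antecedents of two sparse rules, a conclusion is interpolated between their consequents. Triangular sets are represented by their breakpoints, and all numbers are made up for the example.

        # Sparse-rule interpolation sketch with triangular fuzzy sets.
        import numpy as np

        # Triangular fuzzy sets as (left, peak, right) breakpoints.
        A1, B1 = np.array([0.0, 1.0, 2.0]), np.array([10.0, 12.0, 14.0])   # rule 1
        A2, B2 = np.array([6.0, 7.0, 8.0]), np.array([20.0, 23.0, 26.0])   # rule 2
        A_obs  = np.array([3.0, 4.0, 5.0])                                 # observation

        def representative(t):            # use the peak as the representative value
            return t[1]

        # Relative position of the observation between the two antecedents.
        lam = (representative(A_obs) - representative(A1)) / \
              (representative(A2) - representative(A1))

        # Interpolate the consequent breakpoint-wise.
        B_obs = (1 - lam) * B1 + lam * B2
        print(f"lambda = {lam:.2f}, interpolated conclusion = {B_obs}")
        # lambda = 0.50 -> a conclusion roughly midway between B1 and B2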

    • Fuzzy Systems to Support Safe and Trustworthy Artificial Intelligence

      Artificial Intelligence (AI) has matured as a technology; it has quietly entered our lives and has taken a giant leap in the last year. Generative image models and the latest evolutions of large language models have meant that AI has gone, in just a few months, practically from science fiction to being an essential part of the daily lives of hundreds of millions of people around the world.

    • AI Ethics – Challenges and Opportunities

      Artificial intelligence (AI) has entered an increasing number of different domains. A growing number of people – in the general public as well as in research – have started to consider a number of potential ethical challenges and legal issues related to the development and use of AI technologies. This keynote will give an overview of the most commonly expressed ethical challenges and ways being undertaken to reduce their negative impact. 

    • Large Language Models: contextual knowledge matters

      The last few years have witnessed an increasing development of generative AI and its applications, which culminated in the large-scale release of ChatGPT on the Web, with its related potential, risks, and limitations. Large Language Models (LLMs) are one of the technologies at the basis of generative AI; they are nowadays successfully applied to a variety of NLP tasks, among which are machine translation, conversational agents, and several others. Despite this, LLMs suffer from some limitations, among which is a failure to account for contextual knowledge related to the task at hand. A research trend is to inject such knowledge (in-context) into LLMs via prompting techniques. A more recent and promising research direction is to make use of neuro-symbolic approaches to better model and control the process. In this talk, after a short introduction to LLMs, I will present some possible approaches directed at this latter aim. I will also present the research issue of defining personal language models, i.e. LLMs tailored to the language of specific users or groups of users.
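
      A minimal sketch of the in-context knowledge injection mentioned above: task-related facts are retrieved (here by naive keyword overlap) and prepended to the user's question before the prompt is sent to an LLM. The knowledge base, the retrieval heuristic, and the prompt template are all illustrative; the actual model call is left as a placeholder.

        # Assemble a prompt that injects retrieved contextual knowledge in-context.
        knowledge_base = [
            "The Rialto bridge closes to pedestrians during acqua alta alerts.",
            "Vaporetto line 2 skips the Accademia stop on public holidays.",
            "The conference venue is a 10-minute walk from the San Zaccaria stop.",
        ]

        def retrieve_context(question: str, facts: list[str], k: int = 2) -> list[str]:
            q_words = set(question.lower().split())
            scored = sorted(facts,
                            key=lambda f: len(q_words & set(f.lower().split())),
                            reverse=True)
            return scored[:k]

        def build_prompt(question: str) -> str:
            context = "\n".join(f"- {fact}"
                                for fact in retrieve_context(question, knowledge_base))
            return (f"Use only the context below to answer.\n"
                    f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")

        prompt = build_prompt("How do I reach the conference venue by vaporetto?")
        print(prompt)   # this string would then be sent to the LLM of choice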


    • Fuzzy Machine Learning

      The talk will present the concepts, methodologies, and algorithms of fuzzy machine learning, including fuzzy transfer learning, fuzzy concept drift detection and adaptation, and fuzzy recommender systems. It will also show how fuzzy machine learning techniques can effectively support data-driven prediction and decision-making in uncertain, complex, and dynamic situations.

  • CEC Keynote Talks

    • Multifactorial Evolutionary Computation with Applications in Machine Learning and Scientific Discovery

      The human mind demonstrates an exceptional capacity to manage multiple tasks seemingly simultaneously, while also exhibiting the ability to leverage knowledge acquired from solving one task and apply it to different yet related challenges. Given the exploding volume and variety of information streams, the opportunity, tendency, and (even) the need to address different tasks in quick succession is unprecedented. Yet, the design of population-based algorithms of evolutionary computation (EC) has traditionally focused on addressing a single task (or problem) at a time. It is only recently that the idea of multifactorial evolution has come to the fore, leading to the growing popularity of transfer and multitask EC. The nomenclature signifies a search involving multiple optimization tasks, with each task contributing a unique factor influencing the evolution of a population of candidate solutions. The multifactorial evolutionary algorithm (MFEA) is distinguished by implicit genetic transfers between tasks, promising free lunches in optimization by reusing knowledge from related problems. The method makes possible the rapid discovery of diverse, high-quality outcomes, and potentially out-of-the-box solutions, through inter-task genetic crossovers. In this talk, some of the latest algorithmic advances of MFEAs shall be presented, encompassing both single-objective and multiobjective variants. The impact potential of algorithms designed to leverage multiple related tasks shall be showcased in the field of machine learning (through the creation of diverse sets of small but specialized models extracted from large pre-trained architectures) and in AI for scientific discovery (by facilitating fast simulations of multiple instantiations of the fundamental laws of nature). Multiobjective multitasking as a means to arrive at Pareto-optimal solution sets in other application domains shall also be highlighted.
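
      A compact, self-contained sketch of the multifactorial idea on two toy tasks: one population in a unified search space, a skill factor per individual, inter-task crossover with probability rmp as the implicit genetic transfer, and children inheriting a parent's skill factor. Selection here is simplified to per-task elitism rather than the factorial-rank scalar fitness of the full MFEA; all settings are illustrative.

        # Minimal multifactorial-evolution-style loop over two related toy tasks.
        import numpy as np

        rng = np.random.default_rng(0)
        DIM, POP, GENS, RMP = 10, 60, 100, 0.3

        # Two related toy tasks sharing a unified [0, 1]^DIM search space.
        tasks = [lambda x: np.sum(x ** 2),              # task 0: sphere
                 lambda x: np.sum((x - 0.5) ** 2)]      # task 1: shifted sphere

        pop = rng.random((POP, DIM))
        skill = rng.integers(0, 2, size=POP)            # skill factor per individual
        fit = np.array([tasks[s](x) for x, s in zip(pop, skill)])

        for _ in range(GENS):
            children, c_skill = [], []
            for _ in range(POP // 2):
                i, j = rng.integers(0, len(pop), size=2)
                if skill[i] == skill[j] or rng.random() < RMP:
                    # Crossover across skill factors (prob. RMP) is the implicit transfer.
                    mask = rng.random(DIM) < 0.5
                    c1 = np.where(mask, pop[i], pop[j])
                    c2 = np.where(mask, pop[j], pop[i])
                else:
                    c1 = np.clip(pop[i] + rng.normal(0, 0.05, DIM), 0, 1)
                    c2 = np.clip(pop[j] + rng.normal(0, 0.05, DIM), 0, 1)
                children += [c1, c2]
                # Vertical cultural transmission: each child inherits a parent's skill factor.
                c_skill += [skill[rng.choice((i, j))], skill[rng.choice((i, j))]]
            c_fit = np.array([tasks[s](x) for x, s in zip(children, c_skill)])
            all_pop = np.vstack([pop, children])
            all_skill = np.concatenate([skill, c_skill])
            all_fit = np.concatenate([fit, c_fit])
            # Elitist selection within each skill group (simplified stand-in
            # for MFEA's factorial-rank-based scalar fitness).
            keep = np.concatenate([
                np.where(all_skill == t)[0][np.argsort(all_fit[all_skill == t])][:POP // 2]
                for t in range(2)
            ])
            pop, skill, fit = all_pop[keep], all_skill[keep], all_fit[keep]

        for t in range(2):
            print(f"task {t}: best objective {fit[skill == t].min():.4f}")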

    • Evolutionary Machine Learning: 50 Years of Progress

      Evolutionary machine learning has become very popular in recent years. In this talk, I will first provide a brief overview of the history of evolutionary machine learning, covering the major developments over the past 50 years, and then focus on the main paradigms of evolutionary machine learning and their successes in classification, feature selection, regression, clustering, computer vision and image analysis, scheduling and combinatorial optimisation, deep learning, transfer learning, and explainable/interpretable machine learning. The main applications, challenges, and lessons, as well as potential opportunities, will also be discussed.

    • Trust in Optimization Algorithms – The End User Perspective

      Evolutionary Algorithms (EAs) have potentially widespread applicability. They can deal with various types of design parameters, constraints, and objectives; non-linear, discontinuous, noisy fitness landscapes and many, even conflicting, objectives can be handled. There are numerous open-source software packages for quickly applying EA methods to various problems. In practice, however, EAs are not used as frequently as we would hope. In this talk I would like to provide some insights from industrial projects and focus especially on the perspective of the end user. I will argue that hot topics in ML such as trust, transparency, and explainability also need to be considered in Computational Intelligence.

    • Challenges in Data-Driven Evolutionary Optimization

      Many real-world problems are optimized based on data collected from historical records, numerical simulations, or physical experiments; such problems are called data-driven optimization problems. The interdisciplinary research area of data-driven evolutionary optimization involves techniques from data science, machine learning, and evolutionary algorithms. In a data-driven evolutionary optimization framework, data is collected first. Then, surrogate models, which are machine learning models, are built from the data to approximate the real objective functions and/or constraint functions. Given the approximated objective or constraint functions, evolutionary algorithms can then be applied to perform the optimization. This talk will highlight the current challenges of data-driven evolutionary optimization from the viewpoint of real-world applications. The techniques to address those challenges will also be introduced.
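
      A minimal sketch of the offline data-driven framework just described: a surrogate (here a Gaussian process) is fitted to previously collected evaluations, and a simple evolutionary loop then optimises the surrogate's prediction in place of the expensive real objective. The objective, dimensions, and hyperparameters are illustrative.

        # Surrogate-assisted (data-driven) evolutionary optimization sketch.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        rng = np.random.default_rng(0)
        DIM = 5

        def expensive_objective(x):            # stands in for a simulation or experiment
            return np.sum((x - 0.3) ** 2, axis=-1)

        # Step 1: data collected beforehand (historical records / simulations).
        X_data = rng.random((100, DIM))
        y_data = expensive_objective(X_data)

        # Step 2: surrogate model approximating the real objective.
        surrogate = GaussianProcessRegressor().fit(X_data, y_data)

        # Step 3: a simple (mu + lambda) evolutionary loop driven by the surrogate.
        pop = rng.random((30, DIM))
        for _ in range(50):
            offspring = np.clip(pop + rng.normal(0, 0.1, pop.shape), 0, 1)
            merged = np.vstack([pop, offspring])
            pred = surrogate.predict(merged)
            pop = merged[np.argsort(pred)[:30]]      # keep the best predicted individuals

        best = pop[np.argmin(surrogate.predict(pop))]
        print("surrogate optimum:", best.round(2),
              "true value:", float(expensive_objective(best)))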

    • Designing and playing games with computational intelligence

      Games provide an ideal playground for AI researchers to study, explore, evaluate, and experiment with different ideas in a controllable and safe environment. As important applications and products, games also involve complex decision-making and creative design tasks. Games have played important roles in the development of computational intelligence, while different computational intelligence methods have been widely applied to playing and designing games. In this talk, I will show how different computational intelligence methods (e.g., generative models, reinforcement learning, and evolutionary computation) can be harnessed to procedurally generate new game content, from game levels to accompanying music that correlates with game difficulty. In addition, I will also show how novel computational intelligence techniques, especially evolutionary reinforcement learning, can be used to play a range of different games. I will conclude the talk by discussing current challenges and potential research directions.
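
      As an illustrative toy example of procedural content generation by evolutionary search (not any of the speaker's systems), the sketch below encodes a level as an obstacle grid and evolves it towards a designer-chosen difficulty target, with obstacle density standing in for a real difficulty estimator such as playtraces or a learned evaluator.

        # Toy evolutionary level generator: push a crude difficulty estimate to a target.
        import numpy as np

        rng = np.random.default_rng(0)
        H, W, POP, GENS, TARGET_DIFFICULTY = 8, 12, 40, 200, 0.35

        def difficulty(level):                  # placeholder metric; a real system would
            return level.mean()                 # use playtraces or a learned evaluator

        def fitness(level):
            return -abs(difficulty(level) - TARGET_DIFFICULTY)

        population = rng.integers(0, 2, size=(POP, H, W))
        for _ in range(GENS):
            scores = np.array([fitness(lvl) for lvl in population])
            parents = population[np.argsort(scores)[-POP // 2:]]     # keep the best half
            # Offspring: copy a parent and flip a few random tiles (mutation).
            offspring = parents.copy()
            for lvl in offspring:
                ys, xs = rng.integers(0, H, 3), rng.integers(0, W, 3)
                lvl[ys, xs] ^= 1
            population = np.vstack([parents, offspring])

        best = max(population, key=fitness)
        print("evolved level difficulty:", round(difficulty(best), 2))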