Keynote Speakers - ICKE 2023

 


Fellow of IEEE, BCS

Prof. Shaoying Liu, Hiroshima University, Japan

 

Title: Animation-Based Approach to Formal Specification Validation

Biography: Shaoying Liu is a Professor of Software Engineering at Hiroshima University, Japan, an IEEE Fellow, and a BCS Fellow. He received his Ph.D. in Computer Science from the University of Manchester, U.K., in 1992. His research interests include Formal Engineering Methods for Software Development, Specification Verification and Validation, Specification-based Program Inspection, Automatic Specification-based Testing, Testing-Based Formal Verification, and Intelligent Software Engineering Environments. He has published a book entitled "Formal Engineering for Industrial Software Development" with Springer-Verlag, 12 edited conference proceedings, and over 200 academic papers in refereed journals and international conferences. He proposed the term "Formal Engineering Methods" in 1997 and has established Formal Engineering Methods as a research area through his extensive research on the SOFL (Structured Object-Oriented Formal Language) method since 1989 and the development of the ICFEM conference series since 1997. In recent years, he has served as General Chair of ICFEM 2017, Chair of the ICECCS Steering Committee, and a PC member for numerous international conferences. He is an Associate Editor for IEEE Transactions on Reliability and a member of JSSST and IPSJ.

 

Prof. Maiga Chang, Athabasca University, Canada

 

Title: Using Machine Learning Tools in the Cloud: Experience Gained from the Ask4Summary Research Project

Biography: Dr. Maiga Chang is a Full Professor in the School of Computing and Information Systems at Athabasca University, Canada. His research mainly focuses on game-based learning, training and assessment; learning behaviour analysis; learning analytics and academic analytics; intelligent agent technology; health informatics; data mining; computational intelligence; natural language processing; artificial intelligence; museum education; mobile learning and ubiquitous learning; healthcare technology; etc.
Dr. Chang is now Chair (2018~2023) of the IEEE Technical Committee of Learning Technology (TCLT, https://tc.computer.org/tclt), an Executive Committee member of the IEEE Computer Society Special Technical Communities (https://www.computer.org/communities/special-technical-communities), an Executive Committee member of the Asia-Pacific Society for Computers in Education (2017~2024, APSCE, https://www.apsce.net/) and the Global Chinese Society for Computing in Education (2016~2025, GCSCE, http://gcsce.org/), Vice President (2022~) of the International Association of Smart Learning Environments (IASLE, http://iasle.net/), and Chair (2021~) of the Educational Activities Committee, IEEE Northern Canada Section. Dr. Chang is also a Steering Committee member (2020~) of the International Conference on Intelligent Tutoring Systems (ITS, https://its2021.iis-international.org/).
Dr. Chang is editor-in-chief (2019~) of the Journal of Educational Technology & Society (https://www.j-ets.net/, Open Access, SSCI), editor-in-chief (2014~) of the International Journal of Distance Education Technologies (https://igi-global.com/ijdet, Open Access, ESCI, SCOPUS, EI), and editor-in-chief (2020~) of the Bulletin of the Technical Committee on Learning Technology (https://tc.computer.org/tclt/bulletin/, Open Access, ESCI).
Dr. Chang has given more than 125 talks and lectures at various events, has participated in more than 310 international conferences and workshops as a Program Committee Member, and has (co-)authored more than 240 book chapters, journal papers, and international conference papers. He has been an IEEE member since 1996 and has also been a member of ACM (2001-2017), AAAI (2001-2017), INNS (2004-2018), and the Phi Tau Phi Scholastic Honor Society.

 

Prof. Chai Ching-sing, The Chinese University of Hong Kong, Hong Kong

 

Title: Fostering students’ motivation for AI education

Biography: Ching Sing CHAI received his B.A. from National Taiwan University, his PGDE and M.A. from Nanyang Technological University, and his Ed.D. from the University of Leicester. He served as a secondary school Chinese language teacher and head of department, and as an associate professor at Nanyang Technological University. He is currently a professor at The Chinese University of Hong Kong. He has published more than 100 SSCI papers. His research interests include technological pedagogical content knowledge, language learning, STEM and AI education, and teacher professional development.

 

 

 

 

  • Conference Secretary
  • Ms. Teri Zhang
  • Tel: +86-13290000003 E-mail: ickeconf@126.com
  • 2023 9th International Conference on Knowledge Engineering (ICKE 2023)
  • Fujisawa (Kanagawa), Japan | March 18-20, 2023
  • ©ICKE 2023 All rights reserved.

Mitsuo Kawato
ATR Computational Neuroscience Labs
Japan
Computational neuroscience for learning from a small sample
Biography
Mitsuo Kawato received a B.S. degree in physics from Osaka University in 1976 and M.E. and Ph.D. degrees in biophysical engineering from Osaka University in 1978 and 1981, respectively. From 1981 to 1988, he was a faculty member and lecturer at Osaka University. From 1988, he was a senior researcher and then a supervisor in the ATR Auditory and Visual Perception Research Laboratories. Since 2003, he has been Director of ATR Computational Neuroscience Laboratories, and since 2004 he has been an ATR Fellow. In 2010, he became Director of ATR Brain Information Communication Research Laboratories. From 1996 to 2001, he served as Director of the Kawato Dynamic Brain Project, ERATO, JST. From 2004 to 2009, he served as Research Supervisor of the Computational Brain Project, ICORP, JST. From 2008 to March 2013, he served as Research Leader of BMI/Group A, SRPBS, Japanese MEXT. In 2008, he was jointly appointed as Research Supervisor of PRESTO, JST. In November 2013, he was jointly appointed as Research Leader of BMI Technology/Field 3, SRPBS, Japanese MEXT. In 2014, he was jointly appointed as a manager for Portable BMI in the ImPACT Program "Actualizing energetic life by visualizing and controlling Brain Information".

He is now concurrently a specially appointed visiting professor at Toyama Prefectural University and a visiting professor at Kanazawa Institute of Technology, the Nara Institute of Science and Technology, the National Institute for Physiological Sciences, the National Institute of Informatics, the Graduate School of Informatics of Kyoto University, the Precision and Intelligence Laboratory of Osaka Institute of Technology, and the Tamagawa University Brain Science Institute.

For the last fifteen years he has been working in computational neuroscience and neural network modeling, and he has published about 250 papers, reviews, and books. His research topics include decoded neurofeedback as an experimental tool to manipulate spatiotemporal brain activity patterns, rs-fcMRI-based biomarkers of mental disorders, advanced fMRI neurofeedback therapy, simulation studies of dendritic spines, the feedback-error-learning model and its applications to industrial robot manipulators, movement trajectory formation, the bi-directional theory for interactions between cortical areas, cerebellar internal models, and teaching by demonstration for robots.

Mitsuo Kawato
Computational neuroscience for learning from a small sample

 
Abstract
Deep neural networks have been remarkably useful for image classification and phoneme recognition. Combined with reinforcement learning algorithms, deep neural networks have outperformed human experts in simulated video games and the game "Go". To achieve such successes, millions of images, hundreds of millions of phonemes, and tens of millions of games have been utilized as training data sets in supervised learning or as training trials in reinforcement learning. Meanwhile, in the 2015 DARPA Robotics Challenge Finals, many humanoid robots fell while walking on sand, going up stairs, turning bulbs, or getting out of a car. A small number of humanoids completed all the tasks, but they were far slower than humans. By the age of 5, human children are able to execute all of the above tasks more quickly and reliably than humanoid robots developed by world premier researchers. What could be the reasons for this dramatic contrast between success and failure for simulated versus real-world tasks by artificial intelligence? In the simulated video games and "Go", the degrees of freedom of the controlled system were relatively small, there were no hidden variables, and state transitions were deterministic, noise-free, and perfectly described by simple rules. Thus, the computer simulations were exactly correct, without errors. For this last reason, tens of millions of simulated games can be generated by software players and used efficiently for DeepQ learning (a Q-learning algorithm of reinforcement learning combined with deep neural network learning). In contrast, a humanoid robot in the real world is a complicated nonlinear dynamical system with a huge number of degrees of freedom. Indeed, hidden states can be situated far above measured sensory signals and far below issued motor commands. Many physical processes, including contact and friction, are difficult to model. Mainly for this last reason, quantitatively reliable simulations of humanoid robots in real-world environments are extremely difficult, if not impossible. Thus, reinforcement learning in humanoids designed to operate in the real world has typically been conducted using real experimental trials. However, when humanoids fall, they are often damaged such that no further trials can be accumulated before painful, expensive, and laborious repairs are made. In artificial intelligence, or more precisely in neural network learning and machine learning, it is well established that when a learning system with a fixed number of degrees of freedom n is utilized, approximately 10n training samples are necessary. If it is possible to conduct tens of millions of learning trials, a large learning system, such as a deep neural network, can be utilized. However, if only 100 trials can be accumulated, only a very simple learning system with about ten degrees of freedom should be utilized to avoid over-fitting. I postulate that these differences in the number of training samples, and consequently in the allowed degrees of freedom of the control systems, readily explain the dramatic contrast between the success of simulated learning and the failure of real-world learning mentioned above.
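
As a rough illustration of the rule of thumb cited above (approximately 10n training samples for a model with n degrees of freedom), the short Python sketch below fits polynomials of increasing degree to a small noisy sample; the true function, noise level, and degrees shown are hypothetical choices made only for this example, not material from the talk.

```python
# Toy sketch (not from the talk): test error deteriorates once the model's
# degrees of freedom approach the number of training samples.
import numpy as np

rng = np.random.default_rng(0)
true_f = lambda x: np.sin(2 * np.pi * x)          # assumed "true" system
n_train, n_test, noise = 30, 1000, 0.1

x_train = rng.uniform(0, 1, n_train)
y_train = true_f(x_train) + noise * rng.standard_normal(n_train)
x_test = rng.uniform(0, 1, n_test)
y_test = true_f(x_test) + noise * rng.standard_normal(n_test)

for degree in (2, 3, 9, 20):                      # ~degree + 1 free parameters
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"~{degree + 1:2d} degrees of freedom on {n_train} samples -> test MSE {mse:.3f}")
```

In a typical run the low-degree fits generalize acceptably while the highest-degree fit does much worse on the test set, which is the over-fitting effect the abstract refers to.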
Animal brains are confronted with sensorimotor problems that are much more challenging than those faced by humanoid robots. Animal bodies are flexible and possess an enormous number of muscles, sensors, and motor neurons. Neurons are slow computing devices with a significant degree of noise. Thus, physical modeling of animal movements is very difficult, as there are many degrees of freedom, hidden variables, a high noise level, and a risk of injury or death in the case of failure. The human brain contains 10^11 neurons and 10^14 synapses; as a learning control system, it has an enormous number of degrees of freedom. If we assume that the number of synapses corresponds to the degrees of freedom of the learning system, and that a single reinforcement learning trial can be completed within 10 seconds, then it follows that an animal brain would need 10^15 training trials, and thus 10^16 seconds of learning time, to avoid over-fitting. This period is much longer than an animal's lifetime. In contrast to this estimate, humans learn motor control very quickly. For example, humans can learn a new dynamic environment within a few trials, and human infants learn to walk after only several thousand falls. Through computational neuroscience research on sensorimotor learning, I hope to understand a mystery that breaks the common sense of artificial intelligence: how a learning system with 10^11 degrees of freedom can learn to control an extremely complicated nonlinear dynamical system after only 1,000 failures. Kawato and Samejima (2007) reviewed several computational schemes for enabling efficient reinforcement learning from a small number of training samples. These include internal models, sparse estimation algorithms, multiple-paired forward and inverse models, and hierarchical reinforcement learning algorithms. Attention, consciousness, metacognition, and episodic memory are important research topics in cognitive neuroscience, and they have recently attracted the interest of artificial intelligence researchers in the hope that they could provide computational mechanisms to decrease the high dimensionality of data in learning. They may play essential roles in constructing abstract concepts, dimensions, and attributes, which are high-level representations necessary in the upper layers of hierarchical reinforcement learning. With respect to reducing the dimensionality of high-dimensional data, electrical synapses that transmit information via gap junctions are attractive elements in neuronal circuits because they tend to synchronize neurons and effectively reduce the degrees of freedom of the circuit.
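
The back-of-the-envelope estimate in this paragraph can be restated directly; the sketch below simply reproduces the abstract's own arithmetic (10^14 synapses, ~10 samples per degree of freedom, 10 seconds per trial) and compares the result with an assumed 100-year lifespan.

```python
# Restating the abstract's arithmetic: brain-scale reinforcement learning under
# the ~10n-samples rule would take far longer than a lifetime.
synapses = 1e14                      # degrees of freedom assumed in the abstract
trials_needed = 10 * synapses        # ~10 training samples per degree of freedom
seconds_per_trial = 10               # one reinforcement-learning trial
learning_time = trials_needed * seconds_per_trial   # ~1e16 seconds

lifetime = 100 * 365.25 * 24 * 3600  # a (generous) 100-year lifespan, in seconds
print(f"required learning time: {learning_time:.0e} s")
print(f"100-year lifespan:      {lifetime:.0e} s")
print(f"shortfall: about {learning_time / lifetime:.0e} lifetimes")
```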
The cerebellum is important for motor control and motor learning and plays a very important role in multi-joint movements such as walking. The inferior olivary (IO) nucleus sends climbing fiber inputs to Purkinje cells (PCs), the sole output neurons of the cerebellar cortex for motor coordination, and possesses the highest density of gap junctions in the mammalian brain. As a good candidate for a neuronal system that plays a central role in motor learning, and that may be useful in investigating the above-mentioned disparity between the large degrees of freedom of learning systems and conditions where only a small number of training trials are available, I focus on the olivo-cerebellar system. Of special interest is the network of IO neurons, which may control the degrees of freedom by adjusting their synchronous/asynchronous firing activities to provide an adaptive framework for the learning machinery. In cerebellar motor learning, it has long been known that IO neurons transmit error signals to the PCs, inducing plasticity at the parallel fiber-PC synapses. Recent investigations have also revealed multiple plasticity mechanisms, as well as evidence that parallel fiber-evoked simple spikes in PCs contribute to cerebellum-dependent learning to some extent. One dominant view over the last several decades suggests that complex spikes transmitted through the climbing fibers provide instructive signals to the PCs to drive learning. Computational modeling has been one of the promising driving forces for examining the functions of the IO. As the carrier of the teaching signals, the IO has been modeled to provide the climbing fiber inputs in simulation studies of cerebellar learning. To explore IO dynamics in detail, a class of simplified conductance-based models has been developed to reproduce experimental observations of sub-threshold oscillations. Further details of the electrophysiological properties of IO neurons have been described by multi-compartment models, which have been applied to elucidate experimental observations of sub-threshold activities, to examine the capability of their information transmission, and to estimate conductance levels of the IO network from experimental data. Owing to advanced experimental methods as well as the rapid growth in computing power, computational models are nowadays utilized for quantitative understanding of experimentally measured IO dynamics and, furthermore, for testing hypotheses regarding IO functions. Here, I review recent advances in computational modeling of the olivo-cerebellar system.
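
The point that gap junctions tend to synchronize IO neurons and thereby reduce the circuit's effective degrees of freedom can be illustrated with a generic toy model; the sketch below uses two FitzHugh-Nagumo-style oscillators with a diffusive, gap-junction-like coupling term. All parameters are illustrative assumptions, and this is not one of the conductance-based IO models cited above.

```python
# Toy sketch (not a published IO model): two FitzHugh-Nagumo oscillators with
# diffusive "gap-junction" coupling g; larger g pulls their activity into
# synchrony, effectively reducing the pair's degrees of freedom.
import numpy as np

def mean_mismatch(g, steps=4000, dt=0.05, I=0.5, eps=0.08, a=0.7, b=0.8):
    v = np.array([0.0, 1.5])                  # start the two units out of phase
    w = np.zeros(2)
    mismatch = []
    for _ in range(steps):
        coupling = g * (v[::-1] - v)          # current through the gap junction
        dv = v - v**3 / 3.0 - w + I + coupling
        dw = eps * (v + a - b * w)
        v, w = v + dt * dv, w + dt * dw
        mismatch.append(abs(v[0] - v[1]))
    return np.mean(mismatch[steps // 2:])     # average mismatch, second half of run

for g in (0.0, 0.05, 0.5):
    print(f"coupling g = {g:<4}: mean |v1 - v2| = {mean_mismatch(g):.3f}")
```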

Thierry Denoeux
Classification and clustering using belief functions: a review of new developments

 
Abstract
The Dempster-Shafer theory of belief functions is a formal framework for reasoning and computing with uncertainty. A review of different supervised and unsupervised learning algorithms based on belief functions is presented. In distance-based evidential classification, belief functions are constructed from distances to nearest neighbors or prototypes, and the classifier can be trained by minimizing a cost function. A different approach, based on the Evidential EM algorithm, makes it possible to learn a parametric classifier from imprecise and uncertain data. For clustering, we introduce the notion of an evidential partition, which extends hard, fuzzy, possibilistic, and rough partitions. In an evidential partition, cluster-membership uncertainty is represented by belief functions. We present different algorithms for learning an evidential partition from dissimilarity or attribute data, as well as similarity criteria to compare and evaluate evidential partitions. These algorithms have been coded in the R packages evclass and evclust, publicly available from the Comprehensive R Archive Network (CRAN, https://cran.r-project.org).
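
As an illustration of the distance-based evidential classification idea described above, here is a small Python sketch in the spirit of the evidential k-nearest-neighbor rule: each neighbor contributes a simple mass function that weakens with distance, and the masses are pooled with Dempster's rule. The mass form, parameter values, and toy data are assumptions made only for this example; the sketch does not reproduce the interface of the evclass or evclust R packages.

```python
# Simplified evidential k-NN sketch (illustrative; not the evclass/evclust API).
import numpy as np

def dempster_combine(m1, m2):
    """Combine two mass functions given as {frozenset_of_classes: mass} dicts."""
    out = {}
    for A, a in m1.items():
        for B, b in m2.items():
            out[A & B] = out.get(A & B, 0.0) + a * b
    conflict = out.pop(frozenset(), 0.0)           # mass falling on the empty set
    return {A: mass / (1.0 - conflict) for A, mass in out.items()}

def eknn_predict(x, X_train, y_train, k=3, alpha=0.95, gamma=1.0):
    omega = frozenset(y_train)                     # frame of discernment (all classes)
    dists = np.linalg.norm(X_train - x, axis=1)
    m = {omega: 1.0}                               # start from total ignorance
    for i in np.argsort(dists)[:k]:
        s = alpha * np.exp(-gamma * dists[i] ** 2) # support decays with distance
        m = dempster_combine(m, {frozenset([y_train[i]]): s, omega: 1.0 - s})
    # pignistic probabilities: singleton mass plus an equal share of the ignorance
    scores = {c: m.get(frozenset([c]), 0.0) + m.get(omega, 0.0) / len(omega)
              for c in omega}
    return max(scores, key=scores.get), m

# tiny made-up example
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [1.1, 0.9]])
y = ["a", "a", "b", "b"]
label, masses = eknn_predict(np.array([0.9, 1.0]), X, y)
print(label, masses)
```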
Selected References
C. Lian, S. Ruan and T. Denoeux. Dissimilarity metric learning in the belief function framework. IEEE Transactions on Fuzzy Systems, Vol. 24, Issue 6, pages 1555-1564, 2016.
T. Denoeux, S. Sriboonchitta and O. Kanjanatarakul. Evidential clustering of large dissimilarity data. Knowledge-Based Systems, Vol. 106, pages 179-195, 2016.
T. Denoeux, O. Kanjanatarakul and S. Sriboonchitta. EK-NNclus: a clustering procedure based on the evidential K-nearest neighbor rule. Knowledge-Based Systems, Vol. 88, pages 57-69, 2015.
C. Lian, S. Ruan and T. Denoeux. An evidential classifier based on feature selection and two-step classification strategy. Pattern Recognition, Vol. 48, pages 2318-2327, 2015.
T. Denoeux. Maximum likelihood estimation from uncertain data in the belief function framework. IEEE Transactions on Knowledge and Data Engineering, Vol. 25, Issue 1, pages 119-130, 2013.
V. Antoine, B. Quost, M.-H. Masson and T. Denoeux. CECM: Constrained Evidential C-Means algorithm. Computational Statistics and Data Analysis, Vol. 56, Issue 4, pages 894-914, 2012.


Thierry Denoeux
Université de Technologie de Compiègne
France
Classification and clustering using belief functions: a review of new developments
Biography
Thierry Denoeux graduated in 1985 as an engineer from the Ecole des Ponts ParisTech in Paris, France, and received a doctorate from the same institution in 1989. Currently, he is a Full Professor (Exceptional Class) in the Department of Information Processing Engineering at the Université de Technologie de Compiègne (UTC), France, and Scientific Coordinator of the Laboratory of Excellence "Management of Technological Systems of Systems". His research interests concern the management of uncertainty in intelligent systems. His main contributions are in the theory of belief functions, with applications to pattern recognition, data mining, and information fusion. He has published more than 200 journal and conference papers in this area. He is the Editor-in-Chief of the International Journal of Approximate Reasoning and an Associate Editor of several journals, including Fuzzy Sets and Systems and the International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems (IJUFKS).