Artificial Intelligence

VIMIAC16  |  Computer Engineering BSc  |  Semester: 5  |  Credit: 5

Objectives, learning outcomes and acquired knowledge

The main objective of the course is to give a comprehensive introduction to the basic concepts and main areas of artificial intelligence. Students first become acquainted with the components of intelligent behavior and then with how to express them in computational models. This is followed by an overview of the formal and heuristic methods of artificial intelligence, from search in the problem space, through knowledge representation and inference, to different approaches to learning. In laboratory exercises, students get to know the methods used in practice, along with their prerequisites and limitations.

Lecturers

Gábor Hullám

deputy head of department, associate professor

Course coordinator

Synopsis

Detailed topics of the lectures
  1. Major milestones of artificial intelligence (AI). Definition of intelligence, engineering approach to intelligent behavior. Where does AI stand today, which problems are already solved, and what are its main current challenges?
  2. Ethical, legal, and social issues of AI. What changes has AI brought, and what changes will it bring, to people's lives? Which ethical principles should an engineer keep in mind when designing AI systems? The essence of the human-centric AI paradigm.
  3. Designing intelligent systems: agent definition, components, environments, architecture, and implementation. Basic agent types and the relationships between them. Internal structure and behavior of agents. Problem solving by search: general search algorithms as basic mathematical abstractions of intelligent systems. Uninformed search algorithms.
  4. Informed search algorithms, heuristics. Search in complex environments. How to apply the algorithms learned so far creatively to implement intelligent behavior. Constraint satisfaction problems (CSPs). The concept of constraints and constraint propagation. General heuristics, use of the constraint graph. Common CSP applications.
  5. Adversarial search. Optimal decisions in two-player and multi-player games, basics of game theory. The minimax algorithm and its extensions. Games with elements of chance. The evolution of AI methods through solving game problems.
  6. Knowledge as an essential component of intelligence. Formalization of knowledge using logic. Logical operators, inference, proof. Expressive power and properties of propositional logic and first-order logic.
  7. Knowledge engineering, logical description of agents. Problem solving with logical inference. Forward chaining, backward chaining, resolution. Design methods and practical applications. Ontologies, description logics, semantic methods.
  8. Incomplete, uncertain, and changing knowledge: handling uncertainty with probability theory. Bayes' rule, Bayesian updating. Representation of uncertain knowledge using probabilistic networks.
  9. Properties of Bayesian networks. The construction of Bayesian networks, the role of structure and parameterization. Naïve Bayesian networks and their applications. Probabilistic inference in Bayesian networks using exact and approximate methods.
  10. Basic concepts of rationality and utility. Intelligence as the ability to make rational decisions. Utility functions and their properties. Decision networks.
  11. Sequential decision problems. Markov decision processes (MDPs), the Bellman equation. Solution methods for fully and partially observable Markov decision processes. The relationship of MDPs to reinforcement learning.
  12. Learning as a fundamental mechanism of intelligence. Basic concepts of machine learning. The main branches of machine learning: supervised learning, unsupervised learning, reinforcement learning. The process of supervised learning, model evaluation, performance metrics.
  13. The concept of inductive learning, inductive inference. Hypothesis space, consistent hypotheses, Ockham's razor. The bias–variance trade-off, underfitting, overfitting. Basics of statistical learning.
  14. Optimization techniques, gradient descent, stochastic gradient descent, genetic algorithms.
  15. Supervised learning methods. Regression and classification tasks. The naïve Bayes classifier. Regression models: univariate and multivariate linear regression, logistic regression. Regularization techniques.
  16. Decision tree learning and its properties. The decision tree as a tool for learning logical hypotheses. The entropy and information-gain-based approach in decision trees. Pruning and cross-validation techniques.
  17. Ensemble learning. Bagging, stacking, boosting techniques. Random forest, AdaBoost algorithm, gradient boosting.
  18. Basics of neural networks, the perceptron model. Properties, expressive power, and training of artificial neural networks.
  19. Basics of deep neural networks. The components enabling their development: algorithms, architectures, and hardware. Breakthroughs and practical applications achieved by deep learning.
  20. Reinforcement learning. The role of reward in learning. Passive reinforcement learning, adaptive dynamic programming, temporal-difference (TD) learning. Active reinforcement learning. Q-learning.
  21. The future of machine learning. Human-machine decision-making, machine teaching, artificial general intelligence.
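For orientation, the Bayesian updating of topic 8 can be sketched in a few lines. The disease/test probabilities below are invented purely for illustration:

```python
# Bayes' rule: P(H | e) = P(e | H) * P(H) / P(e)
# Illustrative numbers (invented): a rare condition and an imperfect test.
prior = 0.01           # P(disease)
sensitivity = 0.9      # P(positive | disease)
false_positive = 0.05  # P(positive | no disease)

# Total probability of a positive test result (law of total probability)
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Posterior belief after observing a positive test
posterior = sensitivity * prior / p_positive
print(round(posterior, 3))  # 0.154
```

Even with a positive result from a fairly accurate test, the posterior stays below 16% because the prior is so low, which is exactly the kind of effect probabilistic networks make explicit.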
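The gradient descent of topic 14 and the univariate linear regression of topic 15 can likewise be sketched together; the data and hyperparameters below are made up for illustration:

```python
# Batch gradient descent for univariate linear regression
# (illustrative sketch; data and hyperparameters are invented).

def fit_linear(xs, ys, lr=0.01, epochs=2000):
    """Fit y ~ w*x + b by minimizing mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of MSE = (1/n) * sum((w*x + b - y)^2)
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]   # noiseless data on the line y = 2x + 1
w, b = fit_linear(xs, ys)
print(round(w, 2), round(b, 2))  # converges toward w = 2.0, b = 1.0
```

The same update rule, applied to one randomly chosen sample per step instead of the full batch, gives the stochastic gradient descent also named in topic 14.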
Detailed topics of labs
  1. Use of uninformed and informed search algorithms. Exploring common structures and frameworks. Implementation of breadth-first, depth-first, uniform-cost, greedy, and A* search to solve a pathfinding problem.
  2. Adversarial search in a hostile environment. Solving a game task using the minimax algorithm and its extensions. Solving a CSP task with a general heuristic.
  3. Representation of uncertain knowledge with Bayesian networks. Knowledge engineering tasks: building a Bayesian network, defining its structure and parameterization. Inference with the constructed model. Extending a probabilistic network into a decision network by adding utility and decision nodes. Computing the rational decision.
  4. Study of regression models. Applications of univariate and multivariate linear regression. Regularization methods. Logistic regression models.
  5. Learning logical hypotheses using decision trees. Steps of decision tree learning, evaluation of the decision tree model, examination of its generalization ability.
  6. Study of ensemble learning, random forest models.
  7. Investigating the operation of neural networks on simple problems. Investigating the effects of parameter settings and sample size.
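As a rough sketch of the first lab's pathfinding task, the example below runs breadth-first search on a tiny invented grid; the informed variants (uniform-cost, greedy, A*) replace the FIFO frontier with a priority queue but keep the same loop:

```python
from collections import deque

# Breadth-first search on a small grid; '#' cells are walls.
# The grid and the start/goal positions are invented for illustration.
GRID = [
    "S..#",
    ".#.#",
    "...G",
]

def bfs_path_length(grid):
    """Return the number of steps on a shortest S-to-G path, or None."""
    rows, cols = len(grid), len(grid[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if grid[r][c] == "S")
    frontier = deque([(start, 0)])  # FIFO frontier: expand shallowest first
    visited = {start}
    while frontier:
        (r, c), dist = frontier.popleft()
        if grid[r][c] == "G":
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in visited):
                visited.add((nr, nc))
                frontier.append(((nr, nc), dist + 1))
    return None  # goal unreachable

print(bfs_path_length(GRID))  # 5
```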